Why doesn't my Amazon S3 event notification invoke my Lambda function?

I configured an Amazon Simple Storage Service (Amazon S3) event notification to invoke my AWS Lambda function. However, the function doesn't invoke when the Amazon S3 event occurs.

Resolution

Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.

Confirm that your Amazon S3 event type is configured correctly

When you configure an Amazon S3 event notification, you must specify which supported Amazon S3 event types cause Amazon S3 to send the notification. If an event type that you didn't specify occurs in your Amazon S3 bucket, then Amazon S3 doesn't send the notification.

For example, suppose that an event notification is configured to invoke Lambda only for the s3:ObjectCreated:Put event. If you upload a large file, it's uploaded using multipart upload, which emits s3:ObjectCreated:CompleteMultipartUpload rather than s3:ObjectCreated:Put. In that case, select the s3:ObjectCreated:CompleteMultipartUpload event type in addition to s3:ObjectCreated:Put. You can also use the s3:ObjectCreated:* event type to request notifications for any API that was used to create an object.

Confirm that your object key name filters include the uploaded file name

If your event notifications are configured to use object key name filtering, notifications are published only for objects with specific prefixes or suffixes. A wildcard character ("*") can't be used in filters as a prefix or suffix to represent any character. Make sure that the prefix or suffix filters specified in the event notification include the uploaded object key name.

Confirm that your object key name filters are in URL-encoded (percent-encoded) format

If your event notifications are configured to use object key name filtering, notifications are published only for objects with specific prefixes or suffixes. If you use any of the following special characters in your prefixes or suffixes, you must enter them in URL-encoded (percent-encoded) format:

Parentheses ("( )")
ASCII character ranges 00–1F hex (0–31 decimal) and 7F (127 decimal)
Dollar sign ("$")
Ampersand ("&")
Plus sign ("+")
Comma (",")
Colon (":")
Semicolon (";")
Equals sign ("=")
Question mark ("?")
At sign ("@")
Space (" ")

For example, to define the value of a prefix as "test=abc/", enter "test%3Dabc/" for its value.

Note: A wildcard character ("*") can't be used in filters as a prefix or suffix to represent any character.

For more information, see Object key naming guidelines.

Confirm that your Lambda function's AWS Identity and Access Management (IAM) policy has the required permissions

Check your Lambda function's resource-based policy to confirm that it allows your Amazon S3 bucket to invoke the function. If it doesn't, add the required permissions by following the instructions in Granting function access to AWS services. For more information, see AWS Lambda permissions. A boto3 sketch of this check appears at the end of this article.

Note: When you add a new event notification using the Amazon S3 console, the required permissions are added to your function's policy automatically. If you use the put-bucket-notification-configuration action in the AWS CLI to add an event notification, your function's policy isn't updated automatically.

Confirm that your Lambda function is configured to handle concurrent invocations from Amazon S3 event notifications

Your Lambda function must be configured to handle concurrent invocations from Amazon S3 event notifications. If invocation requests arrive faster than your function can scale, or your function is at maximum concurrency, then Lambda throttles the requests. For more information, see Asynchronous invocation and Lambda function scaling.

Related information

How do I troubleshoot issues with invoking a Lambda function with an Amazon S3 event notification using Systems Manager Automation?
Using AWS Lambda with Amazon S3 events
Walkthrough: Configuring a bucket for notifications (SNS topic or SQS queue)
Tutorial: Using an Amazon S3 trigger to invoke a Lambda function
Why do I get the error "Unable to validate the following destination configurations" when creating an Amazon S3 event notification to invoke my Lambda function?
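As a quick programmatic version of the permissions check described above, here is a minimal boto3 sketch that prints the function's resource-based policy and adds an invoke permission for Amazon S3. The function name, bucket name, account ID, and statement ID are placeholders, not values from this article.

```python
import json
import boto3

# Placeholder values; replace with your own.
FUNCTION_NAME = "my-function"
BUCKET_NAME = "my-bucket"
ACCOUNT_ID = "111122223333"

lambda_client = boto3.client("lambda")

# Inspect the function's resource-based policy to see whether S3 can already invoke it.
try:
    policy = json.loads(lambda_client.get_policy(FunctionName=FUNCTION_NAME)["Policy"])
    print(json.dumps(policy, indent=2))
except lambda_client.exceptions.ResourceNotFoundException:
    print("The function has no resource-based policy yet.")

# Allow s3.amazonaws.com to invoke the function for events from this bucket.
lambda_client.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId="s3-invoke-permission",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn=f"arn:aws:s3:::{BUCKET_NAME}",
    SourceAccount=ACCOUNT_ID,
)
```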
https://repost.aws/knowledge-center/lambda-configure-s3-event-notification
How do I set up notifications for my Direct Connect scheduled maintenance or events?
I want to receive notifications for AWS Direct Connect scheduled maintenance or events.

Short description

To help you manage events, the AWS Personal Health Dashboard displays relevant information and provides notifications for activities. You can use the Personal Health Dashboard with the following steps to receive notifications for scheduled maintenance or events that affect Direct Connect.

Resolution

Note: The following steps deploy an AWS CloudFormation stack that sends an automated email notification when an event is posted in the Personal Health Dashboard.

Set up an email notification

1. From the AWS CloudFormation console, choose Create Stack.
2. Select With new resources (standard).
3. In Prerequisite - Prepare template, select the radio button Template is ready.
4. In Specify template, select the radio button Amazon S3 URL, enter https://s3.amazonaws.com/aws-health-tools-assets/cloudformation-templates/DX_Notifier.json, and then choose Next.
5. In Stack name, enter a name for the stack.
6. In EmailAddress, enter the email address to subscribe to the Amazon Simple Notification Service (Amazon SNS) topic, and then choose Next.
7. (Optional) Add Tags. For more information, see Resource tag.
8. (Optional) In Permissions, select IAM role name or IAM role ARN, and then choose Next. For more information, see IAM identifiers.
9. Choose Next and review the configuration.
10. Select the I acknowledge that AWS CloudFormation might create IAM resources agreement, and then choose Submit.

(Optional) Include additional email addresses to receive the SNS notifications

If you want to add additional email addresses for SNS notifications, then subscribe to the topic by completing the following steps:

1. From the Amazon SNS console, in the navigation pane, choose Topics.
2. Choose DXMaintNotify, Actions, and then choose Subscribe to topic.
3. In Create subscription, choose Protocol, and then choose Email.
4. In Endpoint, type the email address, and then choose Create Subscription.

You can download custom notifications for AWS Health events for other services in AWS Health Tools. For more information about Amazon SNS, see What is Amazon SNS?

Related information

Troubleshooting AWS Direct Connect
How do I set an Active/Passive Direct Connect connection to AWS?
What Is AWS Health?
https://repost.aws/knowledge-center/direct-connect-notifications
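For the Direct Connect notification stack described in the article above, you can also deploy the template with an AWS SDK instead of the console. A minimal boto3 sketch; the stack name and email address are placeholders, and CAPABILITY_IAM mirrors the IAM acknowledgment step in the console.

```python
import boto3

cloudformation = boto3.client("cloudformation")

response = cloudformation.create_stack(
    StackName="DXMaintNotify",  # placeholder stack name
    TemplateURL="https://s3.amazonaws.com/aws-health-tools-assets/cloudformation-templates/DX_Notifier.json",
    Parameters=[
        # Placeholder address; it receives the SNS subscription confirmation.
        {"ParameterKey": "EmailAddress", "ParameterValue": "you@example.com"}
    ],
    Capabilities=["CAPABILITY_IAM"],  # the template creates IAM resources
)
print("Stack ID:", response["StackId"])
```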
How do I resolve query timeout errors when I import data from Athena to QuickSight SPICE?
Sometimes I encounter a query timeout error when I import data from Amazon Athena to Amazon QuickSight SPICE. How do I resolve the error?

Resolution

You receive the following error:

[Simba][AthenaJDBC](100071) An error has been thrown from the AWS Athena client. Query timeout

Increase the query runtime for Amazon Athena

When you import data from Athena to QuickSight SPICE, you can receive query timeout errors due to the DML query reaching its maximum runtime. To resolve this issue:

1. Check your Athena query history to find the query that QuickSight generated.
2. Note how long the query ran before it failed.
3. If the amount of time is close to the maximum DML query timeout quota (in minutes), then increase the service quota.

For more information on AWS Service Quotas and requesting a quota increase, see AWS Service Quotas.

Reduce the amount of time to run the query from Athena

The following are steps that you can take in Athena to reduce the query runtime:

- Use partition projection to divide your table into parts and keep the related data together.
- Compress files, or split them if you can. For more information on supported compression formats, see Athena compression support.
- Optimize the size of your files.
- If you're importing an entire table, consider using a custom SQL query.

Related information

How can I explicitly specify the size of the files to be split or the number of files?
Top 10 performance tuning tips for Amazon Athena
https://repost.aws/knowledge-center/quicksight-query-timeout-athena-spice
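To check how close the failed SPICE import above came to the DML timeout, you can pull its runtime from Athena programmatically. A minimal boto3 sketch, assuming you copied the query execution ID of the failed query from the Athena query history:

```python
import boto3

athena = boto3.client("athena")

# Placeholder ID; copy it from the failed query in the Athena query history.
QUERY_EXECUTION_ID = "12345678-1234-1234-1234-123456789012"

execution = athena.get_query_execution(QueryExecutionId=QUERY_EXECUTION_ID)["QueryExecution"]
statistics = execution["Statistics"]

print("State:", execution["Status"]["State"])
print("Total execution time (ms):", statistics.get("TotalExecutionTimeInMillis"))
print("Engine execution time (ms):", statistics.get("EngineExecutionTimeInMillis"))
```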
How do I troubleshoot errors with Elastic IP addresses in Amazon VPC?
I received an "InvalidAddress.NotFound" or "AddressLimitExceeded" error when working with Elastic IP addresses in Amazon Virtual Private Cloud (Amazon VPC). How can I troubleshoot this?

Short description

Important: For information about allocating, describing, tagging, associating, disassociating, releasing, or recovering Elastic IP addresses, see Work with Elastic IP Addresses.

If you receive an InvalidAddress.NotFound error when trying to recover an Elastic IP address, the Elastic IP address can't be recovered.

If you receive an AddressLimitExceeded error when launching a new Amazon Elastic Compute Cloud (Amazon EC2) instance or allocating an Elastic IP address, you've exceeded the limit for Elastic IP addresses for each AWS Region.

Resolution

InvalidAddress.NotFound errors

For reasons why an Elastic IP address might not be recoverable, see Recover an Elastic IP address.

AddressLimitExceeded errors

Use the Service Quotas console to request a limit increase for Elastic IP addresses. Access the page for EC2-VPC Elastic IPs, and then choose Request quota increase.

Related information

Release an Elastic IP address
How do I resolve the error "The address with allocation id cannot be released because it is locked to your account" when trying to release an Elastic IP address from my Amazon EC2 instance?
How do I manage my service quotas?
https://repost.aws/knowledge-center/troubleshoot-eip-address-errors
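For the AddressLimitExceeded case above, you can also check and raise the Elastic IP quota through the Service Quotas API. A boto3 sketch; the quota code below is an assumption based on the "EC2-VPC Elastic IPs" quota name, so confirm it against the listed output before requesting the increase.

```python
import boto3

quotas = boto3.client("service-quotas")

# List EC2 quotas (first page only; paginate if needed) to confirm the Elastic IP quota code.
for quota in quotas.list_service_quotas(ServiceCode="ec2")["Quotas"]:
    if "Elastic IP" in quota["QuotaName"]:
        print(quota["QuotaCode"], quota["QuotaName"], quota["Value"])

# Assumed quota code for "EC2-VPC Elastic IPs"; replace it with the code printed above if it differs.
response = quotas.request_service_quota_increase(
    ServiceCode="ec2", QuotaCode="L-0263D0A3", DesiredValue=10.0
)
print(response["RequestedQuota"]["Status"])
```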
Why aren't my DNS queries forwarded to the DNS servers set on my Client VPN endpoint?
Why aren't my DNS queries forwarded to the DNS servers set on my AWS Client VPN endpoint?

Short description

While your client is connected to a Client VPN endpoint that has target DNS servers configured, you might notice that DNS queries (for example, from nslookup) are forwarded to the client machine's local DNS server. These queries aren't forwarded as expected to the DNS servers configured on the endpoint. This behavior is due to a faulty binding order in Windows (including Windows 2000/XP/7). The faulty binding causes OpenVPN clients to use the default network adapter's DNS settings rather than the VPN adapter's settings. To resolve this issue, change the binding order in the Windows Registry to prefer the TAP-Windows Adapter V9.

Resolution

Change the binding order by modifying the interface metric value for the interfaces. You can modify the interface metric from the command line (netsh) or by using Control Panel in Windows.

Modify the interface metric value from the command line

1. Connect to the Client VPN endpoint using the AWS Client VPN service.
2. Open Command Prompt or PowerShell in Administrator mode.
3. Run ipconfig /all to get a list of Ethernet adapters.
4. Note the Ethernet interface number with an exact description of "TAP-Windows Adapter V9".
5. Run the following command:

netsh interface ipv4 set interface "Ethernet 4" metric="1"

Note: Be sure to use the appropriate Ethernet adapter interface number when running the previous command.

After running the command, you receive an "Ok" code that indicates a successful implementation. If you run nslookup now, you can see that the DNS queries are forwarded to the DNS servers configured on the Client VPN endpoint.

Modify the interface metric value using Control Panel in Windows

1. Open Control Panel.
2. Choose Network and Internet, and then choose Network Connections.
3. Right-click the TAP-Windows Adapter V9 tap adapter.
4. Choose Properties, and then choose Internet Protocol Version 4.
5. Choose Properties, and then choose Advanced.
6. Clear the Automatic Metric box.
7. Enter 1 for Interface Metric.
8. Choose OK.

Important: The previous two methods apply only to Windows 2000/XP/7 systems. For Windows 10 machines, configure the interface metric using the Set-NetIPInterface PowerShell command:

Set-NetIPInterface -InterfaceIndex 4 -InterfaceMetric 1

"InterfaceIndex" is the interface number and "InterfaceMetric" denotes the metric value.

After implementing the workaround, run the following command to check the preferred DNS servers:

netsh interface ip show config
https://repost.aws/knowledge-center/client-vpn-fix-dns-query-forwarding
Why aren’t Amazon S3 event notifications delivered to an Amazon SQS queue that uses server-side encryption?

Amazon Simple Storage Service (Amazon S3) event notifications aren't getting delivered to my Amazon Simple Queue Service (Amazon SQS) queue. For example, I'm not receiving Amazon S3 ObjectCreated event notifications when an object is uploaded to the S3 bucket. My Amazon SQS queue has server-side encryption (SSE) turned on. How can I receive S3 event notifications to an Amazon SQS queue that uses SSE?

Resolution

To configure and send S3 event notifications to an Amazon SQS queue that uses SSE, follow these steps.

Create a customer managed AWS KMS key and configure the key policy

You can encrypt Amazon SQS queues and Amazon Simple Notification Service (Amazon SNS) topics with a customer managed AWS Key Management Service (AWS KMS) key. However, you must grant the Amazon S3 service principal permissions to work with encrypted topics or queues.

Note: The default AWS managed KMS key can't be modified. You must use a customer managed key for the following process and add permissions to the KMS key to allow access to a specified service principal.

To grant the Amazon S3 service principal permissions, add the following statement to the customer managed key policy.

Note: Replace "arn:aws:iam::111122223333:root" with your root account Amazon Resource Name (ARN).

{
  "Version": "2012-10-17",
  "Id": "example-ID",
  "Statement": [
    {
      "Sid": "example-statement-ID",
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": [ "kms:GenerateDataKey", "kms:Decrypt" ],
      "Resource": "*"
    },
    {
      "Sid": "Enable IAM User Permissions",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "kms:*",
      "Resource": "*"
    }
  ]
}

Create an SQS queue and grant Amazon S3 permissions

1. Create an Amazon SQS queue configured to use SSE. For more information, see Configuring server-side encryption (SSE) for a queue (console).
2. To allow Amazon S3 to send messages to the queue, add the following permissions statement to the SQS queue.

Note: Replace the Resource value with your SQS queue ARN, aws:SourceAccount with your AWS source account ID, and aws:SourceArn with your Amazon S3 bucket ARN.

{
  "Version": "2012-10-17",
  "Id": "example-ID",
  "Statement": [
    {
      "Sid": "example-statement-ID",
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "SQS:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:111122223333:sqs-s3-kms-same-account",
      "Condition": {
        "StringEquals": { "aws:SourceAccount": "123456789" },
        "ArnLike": { "aws:SourceArn": "arn:aws:s3:*:*:hellobucket" }
      }
    }
  ]
}

In the preceding example permissions statement, the S3 bucket hellobucket, owned by customer account 123456789, can send ObjectCreated event notifications to the specified SQS queue.

Create an S3 event

To add an Amazon S3 event for your bucket, follow these steps:

1. Open the S3 console, and then choose the hyperlinked Name for your S3 bucket.
2. From the Properties tab, choose Create event notification.
   For Event name, enter a name.
   For Event types, select the event types that you want to receive notifications for.
   For Destination, choose SQS queue.
   For SQS queue, choose your queue.
3. Choose Save changes.

Related information

Amazon S3 event notifications
Key management
https://repost.aws/knowledge-center/sqs-s3-event-notification-sse
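The steps in the article above can also be applied programmatically. A boto3 sketch that attaches the access policy and the customer managed key to the queue, and then points the bucket's ObjectCreated events at it. The queue URL, queue ARN, bucket name, and key alias are placeholders based on the example values above.

```python
import json
import boto3

# Placeholder values based on the example above.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/sqs-s3-kms-same-account"
QUEUE_ARN = "arn:aws:sqs:us-east-1:111122223333:sqs-s3-kms-same-account"
BUCKET = "hellobucket"
KMS_KEY_ID = "alias/my-sqs-key"  # hypothetical customer managed key alias

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

queue_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "SQS:SendMessage",
        "Resource": QUEUE_ARN,
        "Condition": {"ArnLike": {"aws:SourceArn": f"arn:aws:s3:*:*:{BUCKET}"}},
    }],
}

# Turn on SSE with the customer managed key and attach the access policy.
sqs.set_queue_attributes(
    QueueUrl=QUEUE_URL,
    Attributes={"KmsMasterKeyId": KMS_KEY_ID, "Policy": json.dumps(queue_policy)},
)

# Send all ObjectCreated events from the bucket to the queue.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "QueueConfigurations": [{"QueueArn": QUEUE_ARN, "Events": ["s3:ObjectCreated:*"]}]
    },
)
```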
How do I launch an EC2 instance from a custom AMI?
I want to launch an Amazon Elastic Compute Cloud (Amazon EC2) instance from a custom Amazon Machine Image (AMI).

Resolution

To launch a new EC2 instance from an AMI, do the following:

1. Open the EC2 console.
   Note: Be sure to select the AWS Region that you want to launch the instance in.
2. From the navigation pane, choose AMIs.
3. Find the AMI that you want to use to launch a new instance. To begin, open the menu next to the search bar, and then choose one of the following:
   If the AMI that you're using is one that you created, select Owned by me.
   If the AMI that you're using is a public AMI, select Public images.
   If the AMI that you're using is a private image that someone else shared with you, select Private images.
   Note: The search bar automatically provides filtering options as well as automatically matching AMI IDs.
4. Select the AMI, and then choose Launch.
5. Choose an instance type, and then choose Next: Configure Instance Details. Optionally select configuration details, such as associating an IAM role with the instance.
6. Select Next: Add Storage. You can use the default root volume type, or select a new type from the Volume Type dropdown list. Select Add New Volume if you want to add additional storage to your instance.
7. Select Next: Add Tags. You can add custom tags to your instance to help you categorize your resources.
8. Select Next: Configure Security Group. You can associate a security group with your instance to allow or block traffic to the instance.
9. Select Review and Launch, and then review the instance details.
10. Select Previous to return to a previous screen to make changes. Select Launch when you are ready to launch the instance.
11. Select an existing key pair or create a new key pair, select the acknowledge agreement box, and then choose Launch Instances.
12. Choose View Instances to check the status of your instance.

Related information

Instances and AMIs
Amazon Machine Images (AMI)
https://repost.aws/knowledge-center/launch-instance-custom-ami
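If you'd rather launch from the AMI with an SDK than click through the wizard above, here is a minimal boto3 sketch. The AMI ID, instance type, key pair, and security group are placeholders.

```python
import boto3

# Use the Region that contains the AMI.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",             # placeholder custom AMI ID
    InstanceType="t3.micro",                     # placeholder instance type
    KeyName="my-key-pair",                       # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder security group
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```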
How do I stop my Amazon EC2 Windows instance from signaling back as CREATE_COMPLETE before the instance finishes bootstrapping?

I'm bootstrapping an Amazon Elastic Compute Cloud (Amazon EC2) Windows instance using the cfn-init (cfn-init.exe) and cfn-signal (cfn-signal.exe) helper scripts in AWS CloudFormation. My cfn-init script signals back too early. Then, AWS CloudFormation marks my Windows instance as CREATE_COMPLETE before the instance finishes bootstrapping.

Short description

In a Windows instance, UserData scripts are executed by the Ec2ConfigService process. UserData invokes cfn-init.exe, which runs as a child process of Ec2ConfigService.

Your Windows instance could be signaling back as CREATE_COMPLETE for the following reasons:

- If one of the steps executed by cfn-init.exe requires a system reboot, the system can shut down, and then return execution back to the Ec2ConfigService process. The system continues processing the UserData script, and then executes cfn-signal.exe, which signals back to AWS CloudFormation.
- cfn-signal doesn't signal back after a reboot, because UserData runs only once.

In the code examples below, cfn-signal.exe is invoked directly from UserData. If the cfn-init.exe process performs a reboot, then the cfn-signal.exe command can't be invoked, because UserData runs only once.

JSON example:

"UserData": {
  "Fn::Base64": {
    "Fn::Join": [
      "",
      [
        "<script>\n",
        "cfn-init.exe -v -s ", { "Ref": "AWS::StackId" },
        " -r WindowsInstance",
        " --configsets ascending",
        " --region ", { "Ref": "AWS::Region" }, "\n",
        "cfn-signal.exe -e %ERRORLEVEL% --stack ", { "Ref": "AWS::StackId" },
        " --resource WindowsInstance --region ", { "Ref": "AWS::Region" }, "\n",
        "</script>"
      ]
    ]
  }
}

YAML example:

UserData:
  Fn::Base64: !Sub |
    <script>
    cfn-init.exe -v -s ${AWS::StackId} -r WindowsInstance --configsets ascending --region ${AWS::Region}
    cfn-signal.exe -e %ERRORLEVEL% --stack ${AWS::StackId} --resource WindowsInstance --region ${AWS::Region}
    </script>

Resolution

1. Use configsets in the cfn-init Metadata section of your template to separate the configurations that require a reboot from the configurations that don't require a reboot.

2. Move the cfn-signal.exe command from the UserData section of the AWS::EC2::Instance or AWS::AutoScaling::LaunchConfiguration resource to the AWS::CloudFormation::Init Metadata section of the template.

3. Execute cfn-signal.exe as the last command run by the last configset.

4. In your JSON or YAML template, update UserData so that it runs only cfn-init.exe with the ascending configset.

JSON example:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "cfn-init example using configsets",
  "Parameters": { "AMI": { "Type": "AWS::EC2::Image::Id" } },
  "Resources": {
    "WindowsInstance": {
      "Type": "AWS::EC2::Instance",
      "Metadata": {
        "AWS::CloudFormation::Init": {
          "configSets": { "ascending": [ "config1", "config2", "config3" ] },
          "config1": {
            "files": {
              "C:\\setup\\setenvironment.ps1": {
                "content": {
                  "Fn::Join": [
                    "",
                    [
                      "$Folder = 'C:\\Program Files\\Server\\packages\\bin.20182.18.0826.0815\\'\n",
                      "$OldPath = [System.Environment]::GetEnvironmentVariable('path')\n",
                      "$NewPath = $OldPath + ';' + $Folder\n",
                      "[System.Environment]::SetEnvironmentVariable('path',$NewPath,'Machine')"
                    ]
                  ]
                }
              }
            }
          },
          "config2": {
            "commands": {
              "0-restart": {
                "command": "powershell.exe -Command Restart-Computer",
                "waitAfterCompletion": "forever"
              }
            }
          },
          "config3": {
            "commands": {
              "01-setenvironment": {
                "command": "powershell.exe -ExecutionPolicy Unrestricted C:\\setup\\setenvironment.ps1",
                "waitAfterCompletion": "0"
              },
              "02-signal-resource": {
                "command": {
                  "Fn::Join": [
                    "",
                    [
                      "cfn-signal.exe -e %ERRORLEVEL% --resource WindowsInstance --stack ",
                      { "Ref": "AWS::StackName" },
                      " --region ",
                      { "Ref": "AWS::Region" }
                    ]
                  ]
                }
              }
            }
          }
        }
      },
      "Properties": {
        "ImageId": { "Ref": "AMI" },
        "InstanceType": "t2.medium",
        "UserData": {
          "Fn::Base64": {
            "Fn::Join": [
              "",
              [
                "<script>\n",
                "cfn-init.exe -v -s ", { "Ref": "AWS::StackId" },
                " -r WindowsInstance",
                " --configsets ascending",
                " --region ", { "Ref": "AWS::Region" },
                "</script>"
              ]
            ]
          }
        }
      },
      "CreationPolicy": {
        "ResourceSignal": { "Count": "1", "Timeout": "PT30M" }
      }
    }
  }
}

YAML example:

AWSTemplateFormatVersion: '2010-09-09'
Description: cfn-init example using configsets
Parameters:
  AMI:
    Type: 'AWS::EC2::Image::Id'
Resources:
  WindowsInstance:
    Type: 'AWS::EC2::Instance'
    Metadata:
      AWS::CloudFormation::Init:
        configSets:
          ascending:
            - config1
            - config2
            - config3
        config1:
          files:
            C:\setup\setenvironment.ps1:
              content: !Sub |
                $Folder = 'C:\Program Files\Server\packages\bin.20182.18.0826.0815\'
                $OldPath = [System.Environment]::GetEnvironmentVariable('path')
                $NewPath = $OldPath + ';' + $Folder
                [System.Environment]::SetEnvironmentVariable('path',$NewPath,'Machine')
        config2:
          commands:
            0-restart:
              command: powershell.exe -Command Restart-Computer
              waitAfterCompletion: forever
        config3:
          commands:
            01-setenvironment:
              command: powershell.exe -ExecutionPolicy Unrestricted C:\setup\setenvironment.ps1
              waitAfterCompletion: '0'
            02-signal-resource:
              command: !Sub >
                cfn-signal.exe -e %ERRORLEVEL% --resource WindowsInstance --stack ${AWS::StackName} --region ${AWS::Region}
    Properties:
      ImageId: !Ref AMI
      InstanceType: t2.medium
      UserData:
        Fn::Base64: !Sub |
          <script>
          cfn-init.exe -v -s ${AWS::StackId} -r WindowsInstance --configsets ascending --region ${AWS::Region}
          </script>
    CreationPolicy:
      ResourceSignal:
        Count: 1
        Timeout: PT30M

In the preceding templates, the signal is no longer running in UserData, which means that you can't retrieve the exit code provided by the cfn-init process. By default, AWS CloudFormation fails to create or update the stack if a signal isn't received from UserData or Metadata. Then, the stack returns a "timeout exceeded" error.

Tip: To troubleshoot any failures, use the logs at c:\cfn\log in the Windows instance.

5. Set the waitAfterCompletion parameter to forever.

Note: The default value of waitAfterCompletion is 60 seconds. If you change the value to forever, cfn-init exits and then resumes only after the reboot is complete.

Related information

Bootstrapping AWS CloudFormation Windows stacks
AWS CloudFormation User Guide
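If the stack still fails with a "timeout exceeded" error, the stack events usually point at the resource that never signaled. A small boto3 sketch for listing the failed events; the stack name is a placeholder.

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Placeholder stack name.
events = cloudformation.describe_stack_events(StackName="my-windows-stack")["StackEvents"]

# Print every event that ended in a failed state, newest first.
for event in events:
    if event["ResourceStatus"].endswith("FAILED"):
        print(event["LogicalResourceId"], event["ResourceStatus"], event.get("ResourceStatusReason"))
```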
https://repost.aws/knowledge-center/create-complete-bootstrapping
How do I install the AWS SCT and database drivers for Windows to convert the database schema for my AWS DMS task?
I need to install the AWS Schema Conversion Tool and database drivers for Windows for my AWS Database Migration Service (AWS DMS) task.

Short description

The AWS Schema Conversion Tool (AWS SCT) automatically converts the source database schema and most custom code to a format that's compatible with the target database. Any data that can't be converted is marked so that you can convert the data manually during your migration.

Resolution

First, install the AWS SCT on your local system, and then install the required database driver.

Install the AWS SCT

Note: The following steps install the AWS SCT on Windows.

1. Download the zip folder, for example, aws-schema-conversion-tool-1.0.latest.
2. Extract the AWS Schema Conversion Tool .msi file, for example, AWS Schema Conversion Tool-build-number.msi.
3. Open the folder.
4. Run the AWS SCT installer file.

Install the database driver

Note: The following steps install the database drivers for an environment running MySQL with a Java Database Connectivity (JDBC) driver for Windows.

1. Download the connector from your database engine's documentation. For more information, see Installing the Required Database Drivers.
2. Select JDBC Driver for MySQL, and then choose Download.
3. Download the platform-independent (architecture-independent) ZIP archive or TAR archive.
4. Unzip the file, for example, mysql-connector-java-5.1.42.
5. Create a directory, and then move the files to the new directory, for example, C:\java-5-1-42.
6. Note or copy that file location.

Set the driver location globally in the AWS SCT

1. Open the AWS SCT, and then from Settings, choose Global Settings.
2. Choose Drivers.
3. In the MySql Driver Path, choose Browse.
4. Go to the location of your driver that you noted previously, for example, C:\java-5-1-42.
5. Open the folder and choose the executable jar file, for example, mysql-connector-java-5.1.42-bin.
6. Choose OK.

Related information

What Is the AWS Schema Conversion Tool?
Installing, verifying, and updating the AWS Schema Conversion Tool
Using the AWS Schema Conversion Tool user interface
Migration strategy for relational databases
https://repost.aws/knowledge-center/dms-sct-install-drivers-windows
How do I return an EKS Anywhere cluster to a working state when the cluster upgrade fails?

I want to use the eksctl command to upgrade an Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere management cluster. However, the upgrade process fails or is interrupted before completion.

Resolution

When you upgrade an Amazon EKS Anywhere management cluster, the process includes two phases: the verification phase and the upgrade phase. The recovery steps for a failed upgrade depend on which phase of the upgrade was interrupted.

Verification phase

When you upgrade an EKS Anywhere cluster, eksctl runs a set of preflight checks to make sure that your cluster is ready. This occurs before the upgrade, and eksctl modifies your cluster to match the updated specification.

When eksctl performs these checks, you see a message that's similar to the following example:

Performing setup and validations
Connected to server
Authenticated to vSphere
Datacenter validated
Network validated
Creating template. This might take a while.
Datastore validated
Folder validated
Resource pool validated
Datastore validated
Folder validated
Resource pool validated
Datastore validated
Folder validated
Resource pool validated
Machine config tags validated
Control plane and Workload templates validated
Vsphere provider validation
Validate certificate for registry mirror
Control plane ready
Worker nodes ready
Nodes ready
Cluster CRDs ready
Cluster object present on workload cluster
Upgrade cluster kubernetes version increment
Validate authentication for git provider
Validate immutable fields
Upgrade preflight validations pass

Next, eksctl continues to verify the CAPI controllers that run in your management cluster. If any of these controllers need an upgrade, then eksctl also upgrades them. During this process, eksctl also creates a KinD bootstrap cluster to upgrade your management cluster. You see a message that reflects this process:

Ensuring etcd CAPI providers exist on management cluster before
Pausing EKS-A cluster controller reconcile
Pausing GitOps cluster resources reconcile
Upgrading core components
Creating bootstrap cluster
Provider specific pre-capi-install-setup on bootstrap cluster
Provider specific post-setup
Installing cluster-api providers on bootstrap cluster

If any of these checks or actions fail, then the upgrade stops and your management cluster remains at the same original version. For more details about the specific check that failed, check the eksctl logs.

Issues during the verification phase

To recover from a failure at this phase, complete the following steps:

1. Troubleshoot and fix the problem that caused the verification to fail.
2. Run the eksctl anywhere cluster upgrade command again. It's a best practice to use the -v9 flag.

Upgrade phase

In the upgrade phase, eksctl performs the following main actions:

- Moves your management cluster CAPI objects (such as machines, KubeadmControlPlane, and EtcdadmCluster) to the bootstrap cluster
- Upgrades the etcd and control plane components
- Upgrades the worker node components

During this phase, you see a message that's similar to the following example:

Moving cluster management from bootstrap to workload cluster
Applying new EKS-A cluster resource
Resuming EKS-A controller reconcile
Updating Git Repo with new EKS-A cluster spec
GitOps field not specified, update git repo skipped
Forcing reconcile Git repo with latest commit
GitOps not configured, force reconcile flux git repo skipped
Resuming GitOps cluster resources kustomization
Writing cluster config file
Cluster upgraded!

eksctl uses a rolling process to perform the upgrade in place, similar to Kubernetes deployments. It also creates a new virtual machine (VM) with this upgrade, and then it removes the old VM. This process applies to each component, one at a time, until all control plane components are upgraded.

If a VM fails to run, then the upgrade fails and stops after a set timeout interval. The rolling process keeps the old VM running to make sure that your cluster remains in the Ready state.

Issues during the upgrade phase

To recover from a failure during this phase, complete the following steps:

1. Troubleshoot and fix the problem that caused the upgrade to fail. Check the eksctl logs for details about the failure.

2. To facilitate the recovery process, set up the following environment variables:

CLUSTER_NAME: The name of your cluster
CLITOOLS_CONT: The name of the container that runs the cli-tools image that's left in your environment after the upgrade interruption
KINDKUBE: The kubeconfig file that you use to access the KinD bootstrap cluster
MGMTKUBE: The kubeconfig file that you use to access your management cluster
EKSA_VSPHERE_USERNAME and EKSA_VSPHERE_PASSWORD: Credentials to access vCenter

See the following example of these variables:

CLUSTER_NAME=cluster3
CLITOOLS_CONT=eksa_1681572481616591501
KINDKUBE=$CLUSTER_NAME/generated/${CLUSTER_NAME}.kind.kubeconfig
MGMTKUBE=$CLUSTER_NAME/$CLUSTER_NAME-eks-a-cluster.kubeconfig
EKSA_VSPHERE_USERNAME=xxxxx
EKSA_VSPHERE_PASSWORD=yyyyy

3. Make sure that your management cluster CAPI components, such as machines and clusters, are in the Ready state. Also, make sure that kubeApi-server in your management cluster is responsive. To do this, run the following commands:

kubectl --kubeconfig $KINDKUBE -n eksa-system get machines
docker exec -i $CLITOOLS_CONT clusterctl describe cluster cluster3 --kubeconfig $KINDKUBE -n eksa-system
kubectl --kubeconfig $MGMTKUBE -n kube-system get node

You receive output that's similar to the following example:

NAME  CLUSTER  NODENAME  PROVIDERID  PHASE  AGE  VERSION
cluster3-2snw8  cluster3  cluster3-2snw8  vsphere://4230efe1-e1f5-c8e5-9bff-12eca320f5db  Running  3m13s  v1.23.17-eks-1-23-19
cluster3-etcd-chkc5  cluster3    vsphere://4230826c-b25d-937a-4728-3e607e6af579  Running  4m14s
cluster3-md-0-854976576-tw6hr  cluster3  cluster3-md-0-854976576-tw6hr  vsphere://4230f2e5-0a4b-374c-f06b-41ac1f80e41f  Running  4m30s  v1.22.17-eks-1-22-24

$ docker exec -i $CLITOOLS_CONT clusterctl describe cluster cluster3 --kubeconfig $KINDKUBE -n eksa-system
NAME  READY  SEVERITY  REASON  SINCE  MESSAGE
Cluster/cluster3  True      49s
├─ClusterInfrastructure - VSphereCluster/cluster3  True      4m53s
├─ControlPlane - KubeadmControlPlane/cluster3  True      49s
│ └─Machine/cluster3-2snw8  True      2m51s
└─Workers
  ├─MachineDeployment/cluster3-md-0  True      4m53s
  │ └─Machine/cluster3-md-0-854976576-tw6hr  True      4m53s
  └─Other
    └─Machine/cluster3-etcd-chkc5  True      3m55s

$ kubectl --kubeconfig $MGMTKUBE -n kube-system get node
NAME  STATUS  ROLES  AGE  VERSION
cluster3-md-0-854976576-tw6hr  Ready  [none]  18m  v1.22.17-eks-a51510b
cluster3-2snw8  Ready  control-plane,master  19m  v1.23.17-eks-a51510b

4. Back up your management cluster CAPI components:

mkdir ${CLUSTER_NAME}-backup
docker exec -i $CLITOOLS_CONT clusterctl move --to-directory ${CLUSTER_NAME}-backup --kubeconfig $KINDKUBE -n eksa-system

5. Move your management cluster CAPI components back to your management cluster:

docker exec -i $CLITOOLS_CONT clusterctl move --to-kubeconfig $MGMTKUBE --kubeconfig $KINDKUBE -n eksa-system

You receive output that's similar to the following example:

Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Moving Cluster API objects ClusterClasses=0
Creating objects in the target cluster
Deleting objects from the source cluster

6. Make sure that management cluster CAPI components, such as machines and clusters, are no longer in the KinD bootstrap cluster. Verify that they show up in the management cluster. To do this, run the following commands:

kubectl --kubeconfig $KINDKUBE -n eksa-system get cluster -n eksa-system
kubectl --kubeconfig $MGMTKUBE get machines -n eksa-system

You receive output that's similar to the following example:

$ kubectl --kubeconfig $KINDKUBE -n eksa-system get cluster -n eksa-system
No resources found in eksa-system namespace.

$ kubectl --kubeconfig $MGMTKUBE get machines -n eksa-system
NAME  CLUSTER  NODENAME  PROVIDERID  PHASE  AGE  VERSION
cluster2-4n7qd  cluster2  cluster2-4n7qd  vsphere://4230fb07-2823-3474-c41f-b7223dec3089  Running  2m27s  v1.23.17-eks-1-23-19
cluster2-etcd-h4tpl  cluster2    vsphere://42303b36-1991-67a9-e942-dd9959760649  Running  2m27s
cluster2-md-0-fd6c558b-6cfvq  cluster2  cluster2-md-0-fd6c558b-6cfvq  vsphere://423019a3-ad3f-1743-e7a8-ec8772d3edc2  Running  2m26s  v1.22.17-eks-1-22-24

7. Run the upgrade again. Use the --force-cleanup and -v9 flags:

eksctl anywhere upgrade cluster -f cluster3/cluster3-eks-a-cluster.yaml --force-cleanup -v9

Related information

Upgrade vSphere, CloudStack, Nutanix, or Snow cluster
EKS-A troubleshooting
The Cluster API Book (on the Kubernetes website)
https://repost.aws/knowledge-center/eks-anywhere-return-cluster-upgrade-fail
How do I create an Amazon QuickSight Enterprise edition account?
I want to create an Amazon QuickSight Enterprise edition account. How can I do that?

Resolution

Amazon QuickSight offers Standard and Enterprise editions. For more information, see Different editions of Amazon QuickSight.

Prerequisite: If you use AWS Directory Service for Microsoft Active Directory, create these three groups: administrators, readers, and authors. For more information, see Managing user accounts in Amazon QuickSight Enterprise edition.

1. Open the Amazon QuickSight console, and then choose Sign up for QuickSight.
2. Choose Enterprise.
3. Choose Continue.
   Note: To sign up as an educator, a student, or an existing AWS user, see Setting up Amazon QuickSight.
4. Choose Use Role Based Federation (SSO) or Use Active Directory.
5. (Optional) If your directory isn't listed, choose Refresh list.
6. Enter your QuickSight account name and Notification email address, and then select your QuickSight capacity region.
7. (Optional) To enable autodiscovery of data from other AWS services, choose the service that you want Amazon QuickSight to visualize. For more information, see Allowing autodiscovery of AWS resources.
8. Select your groups and users.
9. Choose Authorize.
10. Choose Finish.

Note: Active Directory users aren't automatically notified when they're added to an Amazon QuickSight group.

For more information about how to create an Amazon QuickSight account, see Signing Up for Amazon QuickSight.

Related information

Single sign-on access to Amazon QuickSight using SAML 2.0
Working with data sources in Amazon QuickSight
Sharing analyses
https://repost.aws/knowledge-center/quicksight-enterprise-account
How can I mount an Amazon EFS volume to AWS Batch in a managed compute environment?
I want to mount an Amazon Elastic File System (Amazon EFS) volume in AWS Batch. How can I do that in a managed compute environment without creating custom Amazon Machine Images (AMIs)?

Short description

Note: AWS Batch now supports mounting EFS volumes directly to the containers that are created, as part of the job definition. This is a simpler method than the resolution noted in this article. For more information, see Specifying an Amazon EFS file system in your job definition and the efsVolumeConfiguration parameter in Container properties.

Use a launch template to mount an Amazon EFS volume to an EC2 instance and then to a container. This also allows you to mount the EFS volume to containers through AWS Batch without creating a custom AMI.

Important: When you create an Amazon EFS volume, use the same Amazon Virtual Private Cloud (Amazon VPC) and subnets that are assigned to your compute environment.

Resolution

Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.

1. Create an Amazon EFS file system.

2. Note the file system ID (for example, fs-12345678). You need the file system ID to run your launch template.

3. Create a launch template that includes a user data section and uses the MIME multi-part file format. For more information, see Mime Multi Part Archive on the Cloud-init website.

Example MIME multi-part file

Note: The following example MIME multi-part file configures the compute resource to install the amazon-efs-utils package. Then, the file mounts an existing Amazon EFS file system at /mnt/efs.

MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/cloud-config; charset="us-ascii"

packages:
- amazon-efs-utils

runcmd:
- file_system_id_01=fs-12345678
- efs_directory=/mnt/efs
- mkdir -p ${efs_directory}
- echo "${file_system_id_01}:/ ${efs_directory} efs tls,_netdev" >> /etc/fstab
- mount -a -t efs defaults

--==MYBOUNDARY==--

Important: Replace fs-12345678 with your file system ID.

4. Create a file called mount-efs.json.

Note: Adjust the size of your volume based on your needs.

Example Amazon Linux 2 launch template

{
  "LaunchTemplateName": "user-data",
  "LaunchTemplateData": {
    "BlockDeviceMappings": [
      {
        "Ebs": {
          "DeleteOnTermination": true,
          "VolumeSize": 30,
          "VolumeType": "gp2"
        },
        "DeviceName": "/dev/xvda"
      }
    ],
    "UserData": "TUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PU1ZQk9VTkRBUlk9PSIKCi0tPT1NWUJPVU5EQVJZPT0KQ29udGVudC1UeXBlOiB0ZXh0L2Nsb3VkLWNvbmZpZzsgY2hhcnNldD0idXMtYXNjaWkiCgpwYWNrYWdlczoKLSBhbWF6b24tZWZzLXV0aWxzCgpydW5jbWQ6Ci0gZmlsZV9zeXN0ZW1faWRfMDE9ZnMtODc0MTc4MDYgICAgIAotIGVmc19kaXJlY3Rvcnk9L21udC9lZnMKCi0gbWtkaXIgLXAgJHtlZnNfZGlyZWN0b3J5fQotIGVjaG8gIiR7ZmlsZV9zeXN0ZW1faWRfMDF9Oi8gJHtlZnNfZGlyZWN0b3J5fSBlZnMgdGxzLF9uZXRkZXYiID4+IC9ldGMvZnN0YWIKLSBtb3VudCAtYSAtdCBlZnMgZGVmYXVsdHMKCi0tPT1NWUJPVU5EQVJZPT0tLQ=="
  }
}

Example Amazon Linux 1 launch template

{
  "LaunchTemplateName": "userdata",
  "LaunchTemplateData": {
    "BlockDeviceMappings": [
      {
        "Ebs": {
          "DeleteOnTermination": true,
          "VolumeSize": 8,
          "VolumeType": "gp2"
        },
        "DeviceName": "/dev/xvda"
      },
      {
        "Ebs": {
          "DeleteOnTermination": true,
          "VolumeSize": 22,
          "VolumeType": "gp2"
        },
        "DeviceName": "/dev/xvdcz"
      }
    ],
    "UserData": "TUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PU1ZQk9VTkRBUlk9PSIKCi0tPT1NWUJPVU5EQVJZPT0KQ29udGVudC1UeXBlOiB0ZXh0L2Nsb3VkLWNvbmZpZzsgY2hhcnNldD0idXMtYXNjaWkiCgpwYWNrYWdlczoKLSBhbWF6b24tZWZzLXV0aWxzCgpydW5jbWQ6Ci0gZmlsZV9zeXN0ZW1faWRfMDE9ZnMtODc0MTc4MDYgICAgIAotIGVmc19kaXJlY3Rvcnk9L21udC9lZnMKCi0gbWtkaXIgLXAgJHtlZnNfZGlyZWN0b3J5fQotIGVjaG8gIiR7ZmlsZV9zeXN0ZW1faWRfMDF9Oi8gJHtlZnNfZGlyZWN0b3J5fSBlZnMgdGxzLF9uZXRkZXYiID4+IC9ldGMvZnN0YWIKLSBtb3VudCAtYSAtdCBlZnMgZGVmYXVsdHMKCi0tPT1NWUJPVU5EQVJZPT0tLQ=="
  }
}

Important: If you add user data to a launch template in the Amazon Elastic Compute Cloud (Amazon EC2) console, then make sure that you do one of the following:

Paste in the user data as plaintext.
-or-
Upload the user data from a file.

If you use the AWS CLI or an AWS SDK, you must first base64-encode the user data. Then, submit that string as the value of the UserData parameter when you call CreateLaunchTemplate, as shown in the example JSON template. (A boto3 sketch of this step appears at the end of this article.)

5. Run the following AWS CLI command to create a launch template based on the mount-efs.json file that you created in step 4:

aws ec2 --region us-east-1 create-launch-template --cli-input-json file://mount-efs.json

Note: Replace us-east-1 with your AWS Region.

Example create-launch-template command output

{
  "LaunchTemplate": {
    "LaunchTemplateId": "lt-06935eb650e40f886",
    "LaunchTemplateName": "user-data",
    "CreateTime": "2019-12-26T09:40:46.000Z",
    "CreatedBy": "arn:aws:iam::12345678999:user/alice",
    "DefaultVersionNumber": 1,
    "LatestVersionNumber": 1
  }
}

6. Create a new compute environment and associate that environment with your launch template.

Note: When AWS Batch spins up instances, the Amazon EFS volume is now mounted on the instances.

7. Check that the Amazon EFS volume is mounted on the container instance by using SSH to connect to the instance launched by AWS Batch. Then, run the following Linux df command:

$ df -h

Example df command output

Filesystem   Size  Used  Avail  Use%  Mounted on
devtmpfs     3.9G   92K   3.9G    1%  /dev
tmpfs        3.9G     0   3.9G    0%  /dev/shm
/dev/xvda1    50G  854M    49G    2%  /
127.0.0.1:/  8.0E     0   8.0E    0%  /mnt/efs

Note: /mnt/efs is mounted automatically.

8. Create a job definition in AWS Batch that includes the volume and mount point.

Example AWS Batch job definition

{
  "jobDefinitionName": "userdata",
  "jobDefinitionArn": "arn:aws:batch:us-east-1:12345678999:job-definition/userdata:1",
  "revision": 1,
  "status": "ACTIVE",
  "type": "container",
  "parameters": {},
  "containerProperties": {
    "image": "busybox",
    "vcpus": 1,
    "memory": 1024,
    "command": [],
    "volumes": [
      {
        "host": { "sourcePath": "/mnt/efs" },
        "name": "efs"
      }
    ],
    "environment": [],
    "mountPoints": [
      {
        "containerPath": "/mnt/efs",
        "sourceVolume": "efs"
      }
    ],
    "ulimits": [],
    "resourceRequirements": []
  }
}

9. Submit an AWS Batch job using the job definition that you created in step 8.
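As mentioned in step 4, an SDK caller must base64-encode the MIME user data itself. A boto3 sketch of that step, assuming the MIME multi-part file from step 3 was saved locally as mount-efs-userdata.txt (a hypothetical file name):

```python
import base64
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # replace with your AWS Region

# Read the MIME multi-part user data from step 3 and base64-encode it.
with open("mount-efs-userdata.txt", "rb") as f:
    user_data_b64 = base64.b64encode(f.read()).decode("utf-8")

response = ec2.create_launch_template(
    LaunchTemplateName="user-data",
    LaunchTemplateData={
        "BlockDeviceMappings": [
            {
                "DeviceName": "/dev/xvda",
                "Ebs": {"DeleteOnTermination": True, "VolumeSize": 30, "VolumeType": "gp2"},
            }
        ],
        "UserData": user_data_b64,
    },
)
print(response["LaunchTemplate"]["LaunchTemplateId"])
```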
https://repost.aws/knowledge-center/batch-mount-efs
How can I copy rules from an existing security group to a new security group?
I have a security group in my Amazon Virtual Private Cloud (Amazon VPC). That security group has rules that I want to migrate to another VPC. How can I copy rules from an existing security group to a new security group?

Resolution

You can copy rules from a security group to a new security group created within the same AWS Region.

1. Open the Amazon Elastic Compute Cloud (Amazon EC2) console.
2. In the navigation pane, choose Security Groups.
3. Select the security group that you want to copy.
4. For Actions, choose Copy to new Security Group. The Create Security Group dialog box opens, and is populated with the rules from your existing security group.
5. Specify a Security group name and Description for your new security group.
6. For VPC, choose the ID of the VPC.
7. Choose Create.

Related information

Control traffic to resources using security groups
https://repost.aws/knowledge-center/vpc-copy-security-group-rules
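The console copy described above works only within the same Region. If you need to script the copy into another VPC, a boto3 sketch like the following can help; the group and VPC IDs are placeholders, and rules that reference other security groups usually need to be rewritten by hand because group IDs differ between VPCs.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs.
SOURCE_SG_ID = "sg-0123456789abcdef0"
TARGET_VPC_ID = "vpc-0123456789abcdef0"

source = ec2.describe_security_groups(GroupIds=[SOURCE_SG_ID])["SecurityGroups"][0]

# Create the new group in the target VPC with the same description.
new_sg = ec2.create_security_group(
    GroupName=source["GroupName"] + "-copy",
    Description=source["Description"],
    VpcId=TARGET_VPC_ID,
)

# Copy the inbound rules as-is; review any UserIdGroupPairs entries manually.
if source["IpPermissions"]:
    ec2.authorize_security_group_ingress(
        GroupId=new_sg["GroupId"], IpPermissions=source["IpPermissions"]
    )
```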
Why didn’t Amazon EC2 Auto Scaling terminate an unhealthy instance?

I have an Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling group set up, but it's not terminating an unhealthy Amazon EC2 instance. How can I fix this?

Short description

Amazon EC2 Auto Scaling is able to automatically determine the health status of an instance using Amazon EC2 status checks and Elastic Load Balancing (ELB) health checks. All scaling actions of an Amazon EC2 Auto Scaling group are logged in Activity History on the Amazon EC2 console. Sometimes you can't determine why Amazon EC2 Auto Scaling didn't terminate an unhealthy instance from Activity History alone.

You can find further details about an unhealthy instance's state, and how to terminate that instance, within the Amazon EC2 console. Check the following settings:

- Health check grace period
- Suspended processes
- Instance state in the EC2 console
- Instance state in Auto Scaling groups
- ELB health checks

Resolution

First, note the state of the instance in Amazon EC2 Auto Scaling:

1. Sign in to the Amazon EC2 console. In the navigation pane under Auto Scaling, choose Auto Scaling Groups, and then select the instance's group.
2. Choose the Instances view and note the health state of the instance.

Health Check Grace Period

Amazon EC2 Auto Scaling doesn't terminate an instance that came into service based on EC2 status checks and ELB health checks until the health check grace period expires. To find the grace period length:

1. On the Amazon EC2 console navigation pane, under Auto Scaling, choose Auto Scaling Groups, and then select the instance's group.
2. Choose the Details view and note the Health Check Grace Period length.

Suspended Processes

The suspension of processes such as HealthCheck, ReplaceUnhealthy, or Terminate affects Amazon EC2 Auto Scaling's capability to detect, replace, or terminate unhealthy instances:

1. Under Auto Scaling in the navigation pane of the Amazon EC2 console, choose Auto Scaling Groups, and then select the instance's group.
2. Choose the Details view.
3. Choose Edit and remove any of the following processes from Suspended Processes if they are present: HealthCheck, ReplaceUnhealthy, or Terminate.
4. Choose Save to resume the processes.

Instance State in Amazon EC2 Console

Amazon EC2 Auto Scaling does not immediately terminate instances with an Impaired status. Instead, Amazon EC2 Auto Scaling waits a few minutes for the instance to recover. To check if an instance is impaired:

1. On the Amazon EC2 console navigation pane, under Instances, choose Instances, and then select the instance.
2. Choose the Status Checks view and note if the instance's status is Impaired.

Amazon EC2 Auto Scaling might also delay or not terminate instances that fail to report data for status checks. This usually happens when there is insufficient data for the status check metrics in Amazon CloudWatch. To terminate these instances manually:

1. On the Amazon EC2 console navigation pane, under Instances, choose Instances, and then select the instance.
2. Choose the Monitoring view and note the status of the instance.
3. If the status is Insufficient Data, select the instance again, choose the Actions menu, choose Instance State, and then choose Terminate.

Instance State in Auto Scaling Group

Amazon EC2 Auto Scaling does not perform health checks on instances in the Standby state. To set Standby instances back to the InService state:

1. On the Amazon EC2 console navigation pane, under Auto Scaling Groups, select the instance's group, and then choose the Instances view.
2. Choose the filter menu Any Lifecycle State, and then select Standby.
3. To resume health checks, open the context (right-click) menu for an instance, and then choose Set to InService, which exits the Standby state.

Amazon EC2 Auto Scaling waits to terminate an instance if it is waiting for a lifecycle hook to complete. To find the lifecycle status and complete the lifecycle hook:

1. On the Amazon EC2 console navigation pane, under Auto Scaling, choose Auto Scaling Groups, and then select the instance's group.
2. Choose the Instances view and note the Lifecycle status for the instance.
3. If the status is terminating:wait, you can check the heartbeat timeout and then run complete-lifecycle-action to complete the lifecycle hook.

If Amazon EC2 Auto Scaling is waiting for an ELB connection draining period to complete, it waits to terminate the instance:

1. On the Amazon EC2 console navigation pane, under Auto Scaling, choose Auto Scaling Groups, and then select the instance's group.
2. Choose the Instances view and confirm that the instance's Lifecycle is terminating.
3. Choose the Activity History view.
4. For Filter, select Waiting for ELB connection draining to confirm if the group is waiting to terminate the instance.

ELB Health Checks

ELB settings can affect health checks and instance replacements. Note the instance's status on the ELB console:

1. On the Amazon EC2 console navigation pane, under Load Balancing, choose Load Balancers, and then select the load balancer to which the instance is registered.
2. Choose the Instances view and note the instance's status and description.

Amazon EC2 Auto Scaling doesn't use the results of ELB health checks to determine an instance's health status when the group's health check configuration is set to EC2. As a result, Amazon EC2 Auto Scaling doesn't terminate instances that fail ELB health checks. If an instance's status is OutofService on the ELB console, but the instance's status is Healthy on the Amazon EC2 Auto Scaling console, confirm that the health check type is set to ELB:

1. On the Amazon EC2 console navigation pane, under Auto Scaling, choose Auto Scaling Groups, and then select the instance's group.
2. Choose the Details view and note the Health Check Type.
3. Choose Edit, select ELB for Health Check Type, and then choose Save.

If the group's health check type is already ELB and the instance's status on the ELB console is OutofService, use the status description that you noted earlier to determine further steps:

- Instance registration is still in progress: wait for the load balancer to complete instance registration and for the instance to enter the InService state.
- Instance is in the Amazon EC2 Availability Zone for which LoadBalancer is not configured to route traffic to: edit the subnets of the Auto Scaling group or load balancer to be sure they are the same as the instance's subnets.
- Instance hasn't passed the configured HealthyThreshold number of health checks consecutively: wait for ELB to complete health checks and for the instance to enter the InService state.

Related information

Troubleshooting instances with failed status checks
Why did Amazon EC2 Auto Scaling terminate an instance?
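A quick way to review several of these settings at once is the Auto Scaling API. A boto3 sketch, with a placeholder group name, that prints the health check type, grace period, suspended processes, and per-instance health:

```python
import boto3

autoscaling = boto3.client("autoscaling")

GROUP_NAME = "my-asg"  # placeholder Auto Scaling group name

group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[GROUP_NAME]
)["AutoScalingGroups"][0]

print("Health check type:", group["HealthCheckType"])        # EC2 or ELB
print("Grace period (s):", group["HealthCheckGracePeriod"])
print("Suspended processes:", [p["ProcessName"] for p in group["SuspendedProcesses"]])

for instance in group["Instances"]:
    print(instance["InstanceId"], instance["LifecycleState"], instance["HealthStatus"])

# To force a replacement of a specific instance, you can mark it unhealthy yourself:
# autoscaling.set_instance_health(InstanceId="i-0123456789abcdef0", HealthStatus="Unhealthy")
```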
https://repost.aws/knowledge-center/auto-scaling-terminate-instance
How do I troubleshoot the Client.UnauthorizedOperation error while provisioning CIDR from the IPAM pool?
I want to troubleshoot the Client.UnauthorizedOperation error I get while provisioning CIDR from the Amazon VPC IP Address Manager (IPAM) pool.

Short description

When you run the AWS Command Line Interface (AWS CLI) command provision-public-ipv4-pool-cidr from your shared IPAM pool, you might get the following error, even though you have administrator access:

Client.UnauthorizedOperation
You are not authorized to perform this operation. Encoded authorization failure message

This error occurs if you didn't use the AWSRAMDefaultPermissionsIpamPool permission set when sharing the pool. Most likely, you used the AWSRAMPermissionIpamPoolByoipCidrImport permission set instead. Use that permission only if you have existing BYOIP CIDRs and you want to import them to IPAM.

Use the permission AWSRAMDefaultPermissionsIpamPool to allow principals to view CIDRs and allocations and to allocate or release CIDRs in the shared pool.

Note: The pool is shared from account A to account B. Account B observes this error while provisioning CIDRs. However, you need to resolve the error in account A.

For more information on permissions, see Share an IPAM pool using AWS RAM.

Resolution

Follow these steps in account A to resolve the error.

List the permissions

Use the following AWS CLI command to list the permissions associated with the resource share. This returns the ARNs of the permissions.

Note: If you receive errors when running the AWS CLI command, make sure that you're using the most recent version of the AWS CLI.

aws ram list-resource-share-permissions --resource-share-arn <ARN of the resource share of IPAM Pool>

Note: Replace <ARN of the resource share of IPAM pool> with the ARN of the shared IPAM pool.

Then, run the following AWS CLI command to view the details of the permissions:

aws ram get-permission --permission-arn <ARN of the Permission>

Note: Replace <ARN of the Permission> with the ARN of the permission in the resource share.

Update the resource share

If the list shows that you chose the permission AWSRAMPermissionIpamPoolByoipCidrImport, change the permission as follows:

1. Navigate to the Shared by me: Resource shares page in the AWS RAM console.
2. Select the resource share, and then choose Modify.
3. Choose Next.
4. Under Associate a managed permission with each resource type, choose AWSRAMDefaultPermissionsIpamPool.
5. Choose Next, Go to Review and Update.
6. Choose Update resource share.

Note: If you've updated your permission to AWSRAMDefaultPermissionsIpamPool but still get the Client.UnauthorizedOperation error, contact AWS Support.
https://repost.aws/knowledge-center/vpcipam-fix-clientunathorizedoperation-error
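The permission check and update described in the article above can also be done with the AWS RAM API. A boto3 sketch to run in account A; the resource share ARN is a placeholder, and the managed permission ARN is an assumption, so confirm it against the output of list_resource_share_permissions before associating it.

```python
import boto3

ram = boto3.client("ram")

# Placeholder ARN of the resource share for the IPAM pool (run this in account A).
RESOURCE_SHARE_ARN = "arn:aws:ram:us-east-1:111122223333:resource-share/EXAMPLE"

# List the permissions currently attached to the share.
for permission in ram.list_resource_share_permissions(
    resourceShareArn=RESOURCE_SHARE_ARN
)["permissions"]:
    print(permission["arn"], permission["name"])

# If AWSRAMPermissionIpamPoolByoipCidrImport is attached, replace it with the default IPAM pool
# permission. The ARN below is an assumed AWS managed permission ARN; verify it first.
ram.associate_resource_share_permission(
    resourceShareArn=RESOURCE_SHARE_ARN,
    permissionArn="arn:aws:ram::aws:permission/AWSRAMDefaultPermissionsIpamPool",
    replace=True,
)
```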
How can I troubleshoot high or full disk usage with Amazon Redshift?
I'm experiencing high or full disk utilization on Amazon Redshift and want to troubleshoot this issue.
"I'm experiencing high or full disk utilization on Amazon Redshift and want to troubleshoot this issue.ResolutionHigh disk usage errors can depend on several factors, including:Distribution and sort keyQuery processingTables with VARCHAR(MAX) columnsHigh column compressionMaintenance operationsCartesian products with cross-joinsMinimum table sizeTombstone blocksCopying a large fileDistribution and sort keyReview the table's distribution style, distribution key, and sort key selection. Tables with distribution skew—where more data is located in one node than in the others—can cause a full disk node. If you have tables with skewed distribution styles, then change the distribution style to a more uniform distribution. Note that distribution and row skew can affect storage skew and intermediate rowset when a query is running. For more information about distribution keys and sort keys, see Amazon Redshift engineering’s advanced table design playbook: preamble, prerequisites, and prioritization.To determine the cardinality of your distribution key, run the following query:SELECT <distkey column>, COUNT(*) FROM <schema name>.<table with distribution skew> GROUP BY <distkey column> HAVING COUNT(*) > 1 ORDER BY 2 DESC;Note: To avoid a sort step, use SORT KEY columns in your ORDER BY clause. A sort step can use excessive memory, causing a disk spill. For more information, see Working with sort keys.In the filtered result set, choose a column with high cardinality to view its data distribution. For more information on the distribution style of your table, see Choose the best distribution style.To see how database blocks in a distribution key are mapped to a cluster, use the Amazon Redshift table_inspector.sql utility.Query processingReview any memory allocated to a query. While a query is processing, intermediate query results can be stored in temporary blocks. If there isn't enough free memory, then the tables cause a disk spill. Intermediate result sets aren't compressed, which affects the available disk space. For more information, see Insufficient memory allocated to the query.Amazon Redshift defaults to a table structure with even distribution and no column encoding for temporary tables. But if you are using SELECT...INTO syntax, use a CREATE statement. For more information, see Top 10 performance tuning techniques for Amazon Redshift. Follow the instructions under Tip #6: Address the inefficient use of temporary tables.If insufficient memory is allocated to your query, you might see a step in SVL_QUERY_SUMMARY where is_diskbased shows the value "true". To resolve this issue, increase the number of query slots to allocate more memory to the query. For more information about how to temporarily increase the slots for a query, see wlm_query_slot_count or tune your WLM to run mixed workloads. You can also use WLM query monitoring rules to counter heavy processing loads and to identify I/O intensive queries.Tables with VARCHAR(MAX) columnsCheck VARCHAR or CHARACTER VARYING columns for trailing blanks that might be omitted when data is stored on the disk. During query processing, trailing blanks can occupy the full length in memory (the maximum value for VARCHAR is 65535). It's a best practice to use the smallest possible column size.To generate a list of tables with maximum column widths, run the following query:SELECT database, schema || '.' 
|| "table" AS "table", max_varchar FROM svv_table_info WHERE max_varchar > 150 ORDER BY 2;To identify and display the true widths of the wide VARCHAR table columns, run the following query:SELECT max(octet_length (rtrim(column_name))) FROM table_name;In the output from this query, validate if the length is appropriate for your use case. If the columns are at maximum length and exceed your needs, adjust their length to the minimum size needed.For more information about table design, review the Amazon Redshift best practices for designing tables.High column compressionEncode all columns (except sort key) by using the ANALYZE COMPRESSION or using the automatic table optimization feature in Amazon Redshift. Amazon Redshift provides column encoding. It's a best practice to use this feature, even though it increases read performance and reduces overall storage consumption.Maintenance operationsBe sure that the database tables in your Amazon Redshift database are regularly analyzed and vacuumed. Identify any queries that running against tables that are missing statistics. Preventing queries from running against tables that are missing statistics keeps Amazon Redshift from scanning unnecessary table rows. This also helps optimize your query processing.Note: Maintenance operations such as VACUUM and DEEP COPY use temporary storage space for their sort operations, so a spike in disk usage is expected.For example, the following query helps you identify outdated stats in Amazon Redshift:SELECT * FROM svv_table_info WHERE stats_off > 10 ORDER BY size DESC;Additionally, use the ANALYZE command to view and analyze table statistics.For more information on maintenance operations, see the Amazon Redshift Analyze & Vacuum schema utility.Cartesian products with cross-joinsUse the EXPLAIN plan of the query to look for queries with Cartesian products. Cartesian products are cross-joins that are unrelated and can produce an increased number of blocks. These cross-joins can result in higher memory utilization and more tables spilled to disk. If cross-joins don't share a JOIN condition, then the joins produce a Cartesian product of two tables. Every row of one table is then joined to every row of the other table.Cross-joins can also be run as nested loop joins, which take the longest time to process. Nested loop joins result in spikes in overall disk usage. For more information, see Identifying queries with nested loops.Minimum table sizeThe same table can have different sizes in different clusters. The minimum table size is then determined by the number of columns and whether the table has a SORTKEY and number of slices populated. If you recently resized an Amazon Redshift cluster, you might see a change in your overall disk storage. This is caused by the change in number of slices. Amazon Redshift also counts the table segments that are used by each table. For more information, see Why does a table in an Amazon Redshift cluster consume more or less disk storage space than expected?Tombstone blocksTombstone blocks are generated when a WRITE transaction to an Amazon Redshift table occurs and there is a concurrent Read. Amazon Redshift keeps the blocks before the write operation to keep a concurrent Read operation consistent. Amazon Redshift blocks can't be changed. Every Insert, Update, or Delete action creates a new set of blocks, marking the old blocks as tombstoned.Sometimes tombstones fail to clear at the commit stage because of long-running table transactions. 
Tombstones can also fail to clear when there are too many ETL loads running at the same time. Because Amazon Redshift monitors the database from the time that the transaction starts, any table written to the database also retains the tombstone blocks. If long-running table transactions occur regularly and across several loads, enough tombstones can accumulate to result in a Disk Full error.You can also force Amazon Redshift to perform the analysis regarding tombstone blocks by performing a commit command.If there are long-running queries that are active, then terminate the queries (and release all subsequent blocks) using the commit command:begin;create table a (id int);insert into a values(1);commit;drop table a;Then, to confirm tombstone blocks, run the following query:select trim(name) as tablename, count(case when tombstone > 0 then 1 else null end) as tombstones from svv_diskusage group by 1 having count(case when tombstone > 0 then 1 else null end) > 0 order by 2 desc;Copying a large fileDuring a COPY operation, you might receive a Disk Full error even if there is enough storage available. This error occurs if the sorting operation spills to disk, creating temporary blocks.If you encounter a Disk Full error message, then check the STL_DISK_FULL_DIAG table. Check which query ID caused error and the temporary blocks that were created:select '2000-01-01'::timestamp + (currenttime/1000000.0)* interval '1 second' as currenttime,node_num,query_id,temp_blocks from pg_catalog.stl_disk_full_diag;For more best practices, see Amazon Redshift best practices for loading data.Additional troubleshootingCheck the percentage of disk space under the Performance tab in the Amazon Redshift console. For each cluster node, Amazon Redshift provides extra disk space, which is larger than the nominal disk capacity.If you notice a sudden spike in utilization, use the STL_QUERY to identify the activities and jobs that are running. Note which queries are running at the time of a disk spill:select * from stl_query where starttime between '2018-01-01 00:30:00' and '2018-01-01 00:40:00';Note: Update the values with the time when the spike occurred.To identify the top 20 disk spill queries, run the following query:select A.userid, A.query, blocks_to_disk, trim(B.querytxt) text from stl_query_metrics A, stl_query B where A.query = B.query and segment=-1 and step = -1 and max_blocks_to_disk > 0 order by 3 desc limit 20;View the column value blocks_to_disk to identify disk spilling. Terminate queries that are spilling too much, if needed. Then, allocate additional memory to the queries before running them again. For more details, refer to STL_QUERY_METRICS.To determine if your queries are properly writing to a disk, run the following query:SELECT q.query, trim(q.cat_text)FROM (SELECT query,replace( listagg(text,' ') WITHIN GROUP (ORDER BY sequence), '\\n', ' ') AS cat_textFROM stl_querytextWHERE userid>1GROUP BY query) qJOIN (SELECT distinct queryFROM svl_query_summaryWHERE is_diskbased='t' AND (LABEL ILIKE 'hash%' OR LABEL ILIKE 'sort%' OR LABEL ILIKE 'aggr%' OR LABEL ILIKE 'save%' OR LABEL ILIKE 'window%' OR LABEL ILIKE 'unique%')AND userid > 1) qsON qs.query = q.query;This command also identifies queries that are spilling to disk.Related informationPerformanceAmazon Redshift system overviewFollow"
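As a supplementary sketch that isn't part of the original article, the following Python (boto3) code pulls the PercentageDiskSpaceUsed CloudWatch metric for a cluster, which is one way to watch for the disk usage spikes described above. The Region, cluster identifier, and time window are placeholder assumptions.

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed Region

# PercentageDiskSpaceUsed is published per cluster in the AWS/Redshift namespace.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/Redshift",
    MetricName="PercentageDiskSpaceUsed",
    Dimensions=[{"Name": "ClusterIdentifier", "Value": "example-cluster"}],  # placeholder
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=6),
    EndTime=datetime.datetime.utcnow(),
    Period=300,
    Statistics=["Maximum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Maximum"], 2))

Sustained values near 100% are the signal to apply the table design, query tuning, or maintenance steps described in this article.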
https://repost.aws/knowledge-center/redshift-high-disk-usage
How can I trigger an AWS Glue job in one AWS account based on the status of an AWS Glue job in another account?
I want to create a pipeline where the completion of an AWS Glue job in one AWS account starts a crawler in another account.
"I want to create a pipeline where the completion of an AWS Glue job in one AWS account starts a crawler in another account.Short descriptionYou can create AWS Glue Data Catalog objects called triggers. Triggers can either manually or automatically start one or more crawlers or ETL jobs, but this feature can be used only within one AWS account. You can't use these triggers to start crawlers or ETL jobs residing in another AWS account. To trigger an AWS Glue job in one AWS account based on the status of a job in another account, use Amazon EventBridge and AWS Lambda.ResolutionThe following example gives an overview of how you can use EventBridge and a Lambda function to achieve your use case. Let’s assume you have two AWS Glue jobs, where Job 1 runs in AWS Account A, and Job 2 runs in AWS Account B. Job 2 is dependent on Job 1.Create a custom event bus in AWS Account B and an EventBridge rule in AWS Account A. The EventBridge rule in Account A watches for the AWS Glue Job 1 in the SUCCEEDED state. Then, the target is the event bus created in AWS Account B.Create a Lambda function in AWS Account B that triggers AWS Glue ETL Job 2.Create an EventBridge rule in Account B with the custom event bus that you created in step 1. Add a rule that watches for AWS Glue Job 1 in the SUCCEEDED state, and the Lambda function created earlier as target. The target triggers the AWS Glue ETL Job 2 when the event arrives using AWS Glue API calls.For more information, see the list of Amazon CloudWatch Events generated by AWS Glue that can be used in EventBridge rules.Create a custom event bus in Account B1.    In Account B, open EventBridge. Choose Event buses, and then choose Create event bus. Add this resource-based policy:{ "Version": "2012-10-17", "Statement": [ { "Sid": "allow_account_to_put_events", "Effect": "Allow", "Principal": { "AWS": "<Account-A ID>" }, "Action": "events:PutEvents", "Resource": "arn:aws:events:<Account-B Region>:<Account-B ID>:event-bus/<Account-B CustomEventBus Name>" } ]}Note: Be sure to replace the example items in <> with your own details. For example, replace <Account-B CustomEventBus Name> with the name of the event bus that you created in Account B.2.    After creating the event bus, note its ARN.3.    Choose the custom event bus that you created, and then choose Actions.4.    Choose Start Discovery.Create the event rule in Account A for Job 11.    From Account A, open the EventBridge console.2.    Choose Rules, and then for Event bus, choose default.3.    Choose Create rule. Add a Name, and then for Rule type choose Rule with an event pattern.4.    On the Build event pattern page, under Creation method, choose Rule with an event pattern. Add this JSON:{ "source": ["aws.glue"], "detail-type": ["Glue Job State Change"], "detail": { "jobName": ["<Job 1 name>"], "severity": ["INFO"], "state": ["SUCCEEDED"] }}Note: Be sure to replace <Job 1 name> with the name of the AWS Glue job that you're using.5.    For Target types, choose EventBridge event bus, and then choose Event bus in another AWS account or Region.6.    Enter the ARN of the event bus that you created previously in Account B. This ARN is used as the target.7.    For Execution role, choose Create a new role for this specific resource. 
If you choose Use existing role instead, then make sure that your AWS Identity and Access Management (IAM) policy has the following permissions:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "events:PutEvents" ], "Resource": [ "arn:aws:events:<Account-B Region>:<Account-B ID>:event-bus/<Account-B CustomEventBus Name>" ] } ]}Note: Be sure to replace the <Account-B CustomEventBus Name> in this example with the name of the event bus that you created in Account B.8.    Choose Next, review your settings, and then choose Create.Create a Lambda function in Account B with a target that starts AWS Glue Job 21.    Open the Lambda console.2.    Choose Functions, and then choose Create function.3.    Enter a function name, and for Runtime, choose a Python 3.x version.4.    Under Change default execution role, choose Create a new role with basic Lambda permissions.5.    If you are using an existing role, make sure that it has the required permissions. If it doesn't, add these permissions using the IAM console. First, add the AWSGlueServiceRole (AWS managed policy). Then, give Lambda the IAM permissions to run based on events:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "logs:CreateLogGroup", "Resource": "arn:aws:logs:<Account-B Region>:<Account-B ID>:*" }, { "Effect": "Allow", "Action": [ "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:<Account-B Region>:<Account-B ID>:log-group:/aws/lambda/<Lambda Function Name>:*" ] } ]}Note: Be sure to replace the example items in <> with your own details. For example, replace <Account-B ID> with the account ID for Account B.6.    Choose Create function.7.    In the code section of the function that you created, add this code:# Set up loggingimport jsonimport osimport logginglogger = logging.getLogger()logger.setLevel(logging.INFO)# Import Boto 3 for AWS Glueimport boto3client = boto3.client('glue')# Variables for the job: glueJobName = "<Job 2 Name>"# Define Lambda functiondef lambda_handler(event, context): logger.info('## INITIATED BY EVENT: ') response = client.start_job_run(JobName = glueJobName) logger.info('## STARTED GLUE JOB: ' + glueJobName) logger.info('## GLUE JOB RUN ID: ' + response['JobRunId']) return responseNote: Be sure to replace <Job 2 Name> with the name of AWS Glue Job 2 that runs in Account B.8.    Choose Deploy. You can now test the function to check that it triggers Job 2 in Account B.Create an event rule in Account B for Job 11.    From Account B, open the EventBridge console.2.    Choose Rules, and then choose the event bus that you previously created.3.    Create a new rule under the event bus. Enter a Rule name, and for Rule type, choose Rule with an event pattern.4.    On the Build event pattern page, under Creation method, choose Custom pattern, and then add this JSON:{ "source": ["aws.glue"], "detail-type": ["Glue Job State Change"], "detail": { "jobName": ["<Job 1 name>"], "severity": ["INFO"], "state": ["SUCCEEDED"] }}Note: Be sure to replace <Job 1 name> with the name of AWS Glue Job 1 that runs in Account A.5.    On the Select Targets page, for Target types, choose AWS service.6.    For Select target, choose or type Lambda function, and then choose the function that you previously created from the dropdown list.7.    Choose Next, review your settings, and then choose Create.Test your cross-account AWS Glue job trigger1.    Run Job 1 in Account A. When the job completes, a SUCCEEDED state is sent to the event bus in Account A.2.
 Account A sends the event information to the event bus in Account B.3.    The event bus in Account B runs the event rule. This event rule triggers the Lambda function in Account B. To check Lambda logs, open the Amazon CloudWatch console, choose Log groups, and then choose your Lambda function group. The function group is in the format /aws/lambda/<LambdaFunctionName>.4.    The Lambda function triggers Job 2 in Account B.Related informationSending and receiving Amazon EventBridge events between AWS accountsFollow"
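As an optional check that isn't part of the original article, the following Python (boto3) sketch uses the EventBridge TestEventPattern API to confirm that the rule pattern matches a sample Glue Job State Change event before you run Job 1. The job name, account ID, and Region in the sample event are placeholders.

import json
import boto3

events = boto3.client("events")

# The pattern used by the rules in both accounts (the job name is a placeholder).
pattern = {
    "source": ["aws.glue"],
    "detail-type": ["Glue Job State Change"],
    "detail": {"jobName": ["example-job-1"], "severity": ["INFO"], "state": ["SUCCEEDED"]},
}

# A sample event shaped like the Glue "Job State Change" notification.
sample_event = {
    "version": "0",
    "id": "00000000-0000-0000-0000-000000000000",
    "detail-type": "Glue Job State Change",
    "source": "aws.glue",
    "account": "111122223333",
    "time": "2023-01-01T00:00:00Z",
    "region": "us-east-1",
    "resources": [],
    "detail": {"jobName": "example-job-1", "severity": "INFO", "state": "SUCCEEDED"},
}

match = events.test_event_pattern(
    EventPattern=json.dumps(pattern), Event=json.dumps(sample_event)
)
print("Pattern matches sample event:", match["Result"])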
https://repost.aws/knowledge-center/glue-cross-account-job-trigger
"How do I stop security group rules, listeners, or other changes from reverting on a load balancer in Amazon EKS?"
"When I try to make changes to my load balancer for Amazon Elastic Kubernetes Service (Amazon EKS), the changes automatically revert."
"When I try to make changes to my load balancer for Amazon Elastic Kubernetes Service (Amazon EKS), the changes automatically revert.Short descriptionWhen you use AWS Load Balancer Controller to create a load balancing service or an ingress resource, the controller configures many default parameters. This includes any parameters that you don’t specify in the manifest file, such as a health check path, default timeout, or security group rules.However, you can use an AWS API call to directly change the default configuration. You can make this API call from the Amazon Elastic Compute Cloud (Amazon EC2) console, AWS Command Line Interface (AWS CLI), or another third-party tool. In this case, the controller reverts these changes to their original values during the next cluster reconciliation. For more information, see Controllers and Reconciliation on the Kubernetes Cluster API website.The following issues commonly occur because of reverted load balancer changes in Amazon EKS:The load balancer’s custom security group rules automatically revert to 0.0.0.0/0, or they disappear.The load balancer automatically deletes or adds listener rules.Custom idle timeout values automatically revert to the default values.The certificate automatically reverts to the previous version.You can’t update the health check path because Amazon EKS reverted its values.Amazon EKS modifies roles through the load balancer’s properties.To troubleshoot these issues, first determine what caused your load balancer to make these changes. Specifically, find the relevant API call for the changed resource and the tool that made the call. Then, implement your changes in the manifest file.Note: In the following resolution, a “load balancer” refers to a load balancing service, such as Network Load Balancer or Classic Load Balancer. Or, a load balancer can be an ingress resource, such as Application Load Balancer.ResolutionTo define the expected state of a load balancer, you must specify changes in the manifest file’s annotations. Otherwise, the annotations force the changes to revert to the default, unchanged values.If you try to use an AWS API call to directly change these values, then the controller considers this an out-of-band change. During the next reconciliation, the controller reverts the changes to their original values to sync with your Kubernetes service manifest configuration. Depending on the attribute that the controller reverts, this might result in a long downtime for your service.AWS Load Balancer Controller uses multiple paths of logic for reconciliation. The following scenarios might cause aws-load-balancer-controller pods to restart:An upgrade for the control plane, worker node, or platformAn instance refresh due to underlying problems such as hardware failure or health issuesAny activity that results in an update, delete, or patch API call on the controller podsAutomatic, periodic reconciliationNote: By default, the controller’s reconciliation period is 1 hour. However, this feature doesn’t work on versions 2.4.7 and earlier of Amazon EKS.In these cases, AWS Load Balancer Controller initiates the reconciliation, and your load balancer refers to the most recent manifest file configuration. If you previously made any changes to your load balancer through an API call, then those changes revert.Identify the source of changesFind the API call that relates to the updated resource. Search in AWS CloudTrail for the time frame that the changes occurred. 
For all AWS Load Balancer API Calls, see the Elastic Load Balancing (ELB) API reference. For Amazon EC2 API calls, see the Amazon EC2 API reference.For example, if the controller reverts SecurityGroup rules, then you see that the API RevokeSecurityGroupIngress is invoked. You can then use the corresponding CloudTrail event to identify the API user. If the controller uses WorkerNode roles, then you see the node role that made the API call:...."type": "AssumedRole","arn": "arn:aws:sts::***********:assumed-role/eksctl-mycluster-NodeInstanceRole/i-***********","sessionContext": { "sessionIssuer": { "type": "Role", "arn": "arn:aws:iam::***********:role/eksctl-mycluster-nodegr-NodeInstanceRole", "userName": "eksctl-mycluster-nodegr-NodeInstanceRole" }, ... eventName ": " RevokeSecurityGroupIngress ", "userAgent": "elbv2.k8s.aws/v2.4.5 aws-sdk-go/1.42.27 (go1.19.3; linux; amd64)", "requestParameters": { "groupId": "sg-****", "ipPermissions": { "items": [{ "ipProtocol": "tcp", "fromPort": 443, "toPort": 443, "groups": {}, "ipRanges": { "items": [{ "cidrIp": "0.0.0.0/0" }] }]If you use dedicated roles for AWS Load Balancer Controller, then you see the service account’s AWS Identity and Access Management (IAM) role.Avoid unwanted changesDon’t make out-of-band changes to any parameter of your load balancer. This includes changes from the Amazon EC2 console, the AWS CLI, or any tool that directly calls AWS APIs.For example, you want to update security group rules. Use the .spec.loadBalancerSourceRanges or service.beta.kubernetes.io/load-balancer-source-ranges annotations. You can use these annotations to restrict CIDR IP addresses for a load balancer. For more information on these annotations, see Access control on the AWS Load Balancer Controller GitHub website.Use only proper annotations in the manifest file to update timeout values, health check paths, certificate ARNs, and other properties. For all supported service and ingress annotations, see Service annotations and Ingress annotations on the AWS Load Balancer Controller GitHub website.Follow"
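To supplement the CloudTrail search described above, the following Python (boto3) sketch, which isn't part of the original article, looks up recent RevokeSecurityGroupIngress events so that you can see which principal reverted the security group rules. The Region and time window are placeholder assumptions.

import datetime
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")  # assumed Region

# Look for the API call that changed the security group during the affected window.
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "RevokeSecurityGroupIngress"}
    ],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=2),
    EndTime=datetime.datetime.utcnow(),
)

for event in events["Events"]:
    # Username shows whether the node role or the controller's IAM role made the call.
    print(event["EventTime"], event.get("Username"), event["EventName"])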
https://repost.aws/knowledge-center/eks-load-balancer-changes-automatically-reverted
Why can't I generate a kubeconfig file for my Amazon EKS cluster?
I get an AccessDeniedException error when I try to generate a kubeconfig file for an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
"I get an AccessDeniedException error when I try to generate a kubeconfig file for an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.Short descriptionYou must have permission to use the eks:DescribeCluster API action with the cluster to generate a kubeconfig file for an Amazon EKS cluster. To get permission, attach an AWS Identity and Access Management (IAM) policy to an IAM user.ResolutionTo attach an IAM policy to an IAM user, complete the following steps:1.    Open the IAM console.2.    In the navigation pane, choose Users or Roles.3.    Select the name of the user or role to embed a policy in.4.    On the Permissions tab, choose Add inline policy.5.    Choose the JSON tab.6.    Use a text editor to replace the code with the following IAM policy:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "eks:DescribeCluster" ], "Resource": "*" } ]}7.    Choose Review policy.8.    For Name, enter a name for the policy. For example: eks_update-kubeconfig.Note: You can choose any name for the policy.9.    Choose Create policy.An explicit deny message indicates that if multi-factor authentication (MFA) is false, then there is an IAM policy that's denying most actions:{ "Version": "2012-10-17", "Statement": [ { "Sid": "BlockMostAccessUnlessSignedInWithMFA", "Effect": "Deny", "NotAction": [ "iam:CreateVirtualMFADevice", "iam:EnableMFADevice", "iam:ListMFADevices", "iam:ListUsers", "iam:ListVirtualMFADevices", "iam:ResyncMFADevice", "sts:GetSessionToken" ], "Resource": "*", "Condition": { "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" } } } ]}Note: Because you use an MFA device, you must use an MFA token to authenticate access to AWS resources with the AWS Command Line Interface (AWS CLI). Follow the steps in the article How do I use an MFA token to authenticate access to my AWS resources through the AWS CLI? Then, run the sts get-session-token AWS CLI command.For example:$ aws sts get-session-token --serial-number arn-of-the-mfa-device --token-code code-from-tokenNote: Replace arn-of-the-mfa-device with the ARN of your MFA device and code-from-token with your token's code.You can use temporary credentials by exporting the values to environment variables.For example:$ export AWS_ACCESS_KEY_ID=example-access-key-as-in-previous-output$ export AWS_SECRET_ACCESS_KEY=example-secret-access-key-as-in-previous-output$ export AWS_SESSION_TOKEN=example-session-token-as-in-previous-outputRun the update-kubeconfig command and confirm that it updates the config file under ~/.kube/config:aws eks --region region-code update-kubeconfig --name cluster_nameNote: Replace region-code with your AWS Region's code and cluster name with your cluster's name.Follow"
https://repost.aws/knowledge-center/eks-generate-kubeconfig-file-for-cluster
How do I troubleshoot printing issues with my Windows WorkSpaces?
I'm having problems printing from Amazon WorkSpaces for Windows. How do I troubleshoot these issues?
"I'm having problems printing from Amazon WorkSpaces for Windows. How do I troubleshoot these issues?Short descriptionWindows WorkSpaces supports local printer redirection and network printers. When you print from an application in your WorkSpace, local printers are included in your list of available printers.Basic remote printingBasic remote printing is turned on by default. Basic remote printing offers limited printing by using a generic printer driver on the host side. This setting allows for compatible basic printing functionality, but you can't use all the printer's available features. Depending on the device type, basic remote printing might require a matching printer driver on the host side.Advanced remote printingAdvanced remote printing for Windows clients lets you use specific printer features, such as double-sided printing. To use specific printer features, install the matching printer driver on the host side.ResolutionTo troubleshoot printing issues in Windows WorkSpaces, follow these steps:1.    Confirm that you can print as expected from your local client computer. Do this to rule out printer hardware problems and to verify that you have the latest printer drivers installed on your local client.2.    Verify that you have the latest WorkSpaces client application. If not, update your WorkSpace's client application version. Then, attempt printing from your WorkSpace to determine if printing issues persist.3.    Reboot the WorkSpace from the WorkSpaces console. Rebooting installs or fixes WorkSpaces components that are required for printing.4.    Verify whether your WorkSpace redirects print jobs to a local client computer.For a Windows local client computer, in your WorkSpace, open Print Management. Under All Printers, verify that the redirected printer's queue is ready. Right-click the redirected printer and choose Print Test Page. Then, verify that the local client print spooler has the print job from your WorkSpace queued.For a macOS local client computer, in your WorkSpace, open Printers & scanners. Choose the redirected printer and choose Manage and Print a test page. Navigate back to Printers & scanners, choose the redirected printer, and choose Open queue. Verify that the local client print spooler has the print job from your WorkSpace queue.When you print a test page, you obtain details about the printer's properties and driver. You can use this information for additional troubleshooting.Local printers use the following naming conventions based on WorkSpace protocols:PCoIP: <local-printer-name> (Local - <username>.<computername>)WorkSpaces Streaming Protocol (WSP): <local-printer-name> - Redirected (<-computername>)For more information about local printers, see Local printers.5.    When your print jobs still aren't completing, verify if related services are running in your WorkSpace. To verify:In your local WorkSpace, open Task Manager and choose the Details tab.For a PCoIP WorkSpace, verify if the Windows print spooler service (spoolsv.exe) and Teradici PCoIP printing service (pcoip_vchan_printing_svc.exe) are running.For a WSP WorkSpace, verify if the Windows print spooler service (spoolsv.exe) is running and check if printer_host.dll exists in the location C:\Program Files\Amazon\WSP\printer_host.dll.If the Teradici PCoIP printing service isn't running or the WSP WorkSpace printer_host.dll doesn't exist, remote printers in WorkSpaces aren't redirected to your local client. 
To start or fix the required printing service components, reboot the WorkSpace from the WorkSpaces console.Note: Remote printing is implemented as a virtual channel. If virtual channels are turned off, then remote printing won't work.6.    If advanced remote printing features aren't working but basic printing works, check if you have the appropriate group policies configured. Match the printer drivers installed in WorkSpaces to the printer drivers installed on the local client computer. Check the driver versions by running the following command in your Windows WorkSpace:Get-PrinterDriverSometimes, a matching printer driver can't be found for the host operating system. An available printer driver might not be compatible with the printer. In these cases, you can print to those printers by selecting the basic print settings.7.    If you've completed the troubleshooting steps and haven't resolved the issue, contact AWS Support for assistance.Follow"
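If you prefer to script the checks in step 5, the following Python sketch, which isn't part of the original article, verifies from inside a Windows WorkSpace that the print spooler service is running and that the WSP printer redirection component exists. The file path matches the location given above; adjust it if your installation differs.

import os
import subprocess

# Check that the Windows print spooler service is running inside the WorkSpace.
spooler = subprocess.run(
    ["sc", "query", "Spooler"], capture_output=True, text=True, check=False
)
print("Spooler running:", "RUNNING" in spooler.stdout)

# For WSP WorkSpaces, confirm that the printer redirection component exists.
wsp_dll = r"C:\Program Files\Amazon\WSP\printer_host.dll"
print("printer_host.dll present:", os.path.exists(wsp_dll))

If either check fails, reboot the WorkSpace from the WorkSpaces console as described above.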
https://repost.aws/knowledge-center/workspaces-troubleshoot-windows-printing
Why does my AWS Glue job fail with lost nodes when I migrate a large data set from Amazon RDS to Amazon S3?
"I'm migrating a large dataset from Amazon Relational Database Service (Amazon RDS) or an on-premises JDBC database to Amazon Simple Storage Service (Amazon S3) using AWS Glue. My ETL job runs for a long time, and then fails with lost nodes."
"I'm migrating a large dataset from Amazon Relational Database Service (Amazon RDS) or an on-premises JDBC database to Amazon Simple Storage Service (Amazon S3) using AWS Glue. My ETL job runs for a long time, and then fails with lost nodes.Short descriptionAWS Glue uses a single connection to read the entire dataset. If you're migrating a large JDBC table, the ETL job might run for a long time without signs of progress on the AWS Glue side. Then, the job might eventually fail because of disk space issues (lost nodes). To resolve this issue, read the JDBC table in parallel. If the job still fails with lost nodes, use an SQL expression as a pushdown predicate.ResolutionUse one or more of the following methods to resolve lost node errors for JDBC datasets.Read the JDBC table in parallelIf the table doesn't have numeric columns (INT or BIGINT), then use the hashfield option to partition the data. Set hashfield to the name of a column in the JDBC table. For best results, choose a column with an even distribution of values.If the table has numeric columns, set the hashpartitions and hashexpression options in the table or while creating the DynamicFrame:hashpartitions: defines the number of tasks that AWS Glue creates to read the datahashexpression: divides rows evenly among tasksThe following is an example of how to set hashpartitions and hashexpression while creating a DynamicFrame with a JDBC connection. In the connection_option, replace the JDBC URL, user name, password, table name, and column name.connection_option= {"url": "jdbc:mysql://mysql–instance1.123456789012.us-east-1.rds.amazonaws.com:3306/database", "user": "your_user_name", "password": "your_password","dbtable": "your_table","hashexpression":"column_name","hashpartitions":"10"}datasource0 = glueContext.create_dynamic_frame.from_options('mysql',connection_options=connection_option,transformation_ctx = "datasource0")Here's an example of how to set hashpartitions and hashexpression while creating a DynamicFrame from the AWS Glue catalog:datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "your_database", table_name = "your_table",additional_options={"hashexpression":"column_name","hashpartitions":"10"}, transformation_ctx = "datasource0")Note: Setting larger values for hashpartitions can reduce your table's performance. That's because each task reads the entire table, and then returns a set of rows to the executor.For more information, see Reading from JDBC tables in parallel.Use a SQL expression as a pushdown predicateNote: The following SQL expression doesn't work as a pushdown predicate for Oracle databases. However, this expression does work as a pushdown predicate for all other databases that are natively supported by AWS Glue (Amazon Aurora, MariaDB, Microsoft SQL Server, MySQL, and PostgreSQL).If the table contains billions of records and tebibytes (TiB) of data, the job might take a long time to complete or fail with lost nodes, even after you set hashpartitions and hashexpression. To resolve these issues, use a SQL expression similar to the following with the hashexpression option:column_name > 1000 AND column_name < 2000 AND column_nameThe SQL expression acts as a pushdown predicate and forces the job to read one set of rows per job run, rather than reading all the data at once. 
The full statement looks similar to the following:datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "sampledb", table_name = "test_table",additional_options={"hashexpression":"column_name > 1000 AND column_name < 2000 AND column_name","hashpartitions":"10"}, transformation_ctx = "datasource0")Note: Be sure to turn off job bookmarks for initial job runs with this configuration. When you run a job with a job bookmark, AWS Glue records the maximum value of the column. When you run the job again, AWS Glue processes only the rows that have values greater than the previous bookmark value. You can turn on job bookmarks during the last job run as needed.Related informationWhy is my AWS Glue job failing with the error "Exit status: -100. Diagnostics: Container released on a *lost* node"?Defining connections in the AWS Glue Data CatalogFollow"
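As a small illustration that isn't part of the original article, the following Python (boto3) sketch starts a job run with job bookmarks turned off, as recommended in the note above. The job name and Region are placeholders.

import boto3

glue = boto3.client("glue", region_name="us-east-1")  # assumed Region

# Start the job with bookmarks turned off so that every run evaluates the full predicate range.
response = glue.start_job_run(
    JobName="example-etl-job",  # placeholder job name
    Arguments={"--job-bookmark-option": "job-bookmark-disable"},
)
print("JobRunId:", response["JobRunId"])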
https://repost.aws/knowledge-center/glue-lost-nodes-rds-s3-migration
How do I troubleshoot OutOfMemory errors in Amazon ECS?
I want to troubleshoot memory usage issues in my Amazon Elastic Container Service (Amazon ECS) task.-or-The containers in my Amazon ECS task are exiting due to an OutOfMemory error.
"I want to troubleshoot memory usage issues in my Amazon Elastic Container Service (Amazon ECS) task.-or-The containers in my Amazon ECS task are exiting due to OutOfMemory error.Short descriptionBy default, a container has no resource constraints and can use as much resources as the host’s kernel scheduler allows. With Docker, you can control the amount of memory used by a container. Be sure not to allow a running container to consume most of the host machine’s memory. On Linux hosts, when the kernel detects that there isn't enough memory to perform important system functions, it throws an OutOfMemory exception and starts to end the processes to free up memory.With Docker, you might use either of the following:Hard memory limits that allow the container to use no more than a certain amount of user or system memorySoft limits that allow the container to use as much memory as required unless certain conditions, such as low memory or contention on the host machine, occurWhen an Amazon ECS task is ended because of OutOfMemory issues, you might receive the following error message:OutOfMemoryError: Container killed due to memory usageYou get this error when a container in your task exits because the processes in the container consume more memory than the amount that was allocated in the task definition.ResolutionTo troubleshoot OutOfMemory errors in your Amazon ECS task, do the following:Check the stopped task for errors in the Amazon ECS console. Check the Stopped reason field for the error code OutOfMemory.Turn on Amazon CloudWatch Logs for your tasks to debug application level issues that occur due to memory usage.View the service’s memory use in either the Amazon ECS console or CloudWatch console.Use CloudWatch Container Insights to monitor memory use. You can view the memory usage of a certain container for a certain period of time with a query similar to the following:stats max(MemoryUtilized) as mem, max(MemoryReserved ) as memreserved by bin (5m) as period, TaskId, ContainerName| sort period desc | filter ContainerName like “example-container-name” | filter TaskId = “example-task-id”To mitigate the risk of task instability due to OutOfMemory issues, do the following:Perform tests to understand the memory requirements of your application before placing the application in production. You can perform a load test on the container within a host or server. Then, you can check the memory usage of the containers using docker stats.Be sure that your application runs only on hosts with adequate resources.Limit the amount of memory that your container can use. You can do this by setting appropriate values for hard limit and soft limit for your containers. Amazon ECS uses several parameters for allocating memory to tasks: memoryReservation for soft limit and memory for hard limit. When you specify these values, they are subtracted from the available memory resources for the container instance where the container is placed.Note: The parameter memoryReservation isn't supported for Windows containers.You can turn on swap for containers with high transient memory demands. Doing so reduces the chance of OutOfMemory errors when the container is under high load.Note: If you're using tasks that use the AWS Fargate launch type, then parameters maxSwap and sharedMemorySize aren't supported.Important: Be aware of when you configure swap on your Docker hosts. Turning on swap can slow down your application and reduce the performance. 
However, this feature prevents your application from running out of system memory.To detect Amazon ECS tasks that were ended because of OutOfMemory events, use the following AWS CloudFormation template. With this template, you can create an Amazon EventBridge rule, Amazon Simple Notification Service (Amazon SNS) topic, and an Amazon SNS topic policy. When you run the template, the template asks for an email list, topic name, and a flag to turn monitoring on or off.AWSTemplateFormatVersion: 2010-09-09Description: > - Monitor OOM Stopped Tasks with EventBridge rules with AWS CloudFormation.Parameters: EmailList: Type: String Description: "Email to notify!" AllowedPattern: '[a-zA-Z0-9]+@[a-zA-Z0-9]+\.[a-zA-Z]+' Default: "mail@example.com" SNSTopicName: Type: String Description: "Name for the notification topic." AllowedPattern: '[a-zA-Z0-9_-]+' Default: "oom-monitoring-topic" MonitorStatus: Type: String Description: "Enable / Disable monitor." AllowedValues: - ENABLED - DISABLED Default: ENABLEDResources: SNSMonitoringTopic: Type: AWS::SNS::Topic Properties: Subscription: - Endpoint: !Ref EmailList Protocol: email TopicName: !Sub ${AWS::StackName}-${SNSTopicName} SNSMonitoringTopicTopicPolicy: Type: AWS::SNS::TopicPolicy Properties: Topics: - !Ref SNSMonitoringTopic PolicyDocument: Version: '2012-10-17' Statement: - Sid: SnsOOMTopicPolicy Effect: Allow Principal: Service: events.amazonaws.com Action: [ 'sns:Publish' ] Resource: !Ref SNSMonitoringTopic - Sid: AllowAccessToTopicOwner Effect: Allow Principal: AWS: '*' Action: [ 'sns:GetTopicAttributes', 'sns:SetTopicAttributes', 'sns:AddPermission', 'sns:RemovePermission', 'sns:DeleteTopic', 'sns:Subscribe', 'sns:ListSubscriptionsByTopic', 'sns:Publish', 'sns:Receive' ] Resource: !Ref SNSMonitoringTopic Condition: StringEquals: 'AWS:SourceOwner': !Ref 'AWS::AccountId' EventRule: Type: AWS::Events::Rule Properties: Name: ECSStoppedTasksEvent Description: Triggered when an Amazon ECS Task is stopped EventPattern: source: - aws.ecs detail-type: - ECS Task State Change detail: desiredStatus: - STOPPED lastStatus: - STOPPED containers: reason: - prefix: "OutOfMemory" State: !Ref MonitorStatus Targets: - Arn: !Ref SNSMonitoringTopic Id: ECSOOMStoppedTasks InputTransformer: InputPathsMap: taskArn: $.detail.taskArn InputTemplate: > "Task '<taskArn>' was stopped due to OutOfMemory."After you create the CloudFormation stack, you can verify your email to confirm the subscription. After a task is ended due to OutOfMemory issue, you get an email with a message similar to the following:"Task 'arn:aws:ecs:eu-west-1:555555555555:task/ECSFargate/0123456789abcdef0123456789abcdef' was stopped due to OutOfMemory."Related informationHow do I troubleshoot issues with containers exiting in my Amazon ECS tasks?Follow"
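The following Python (boto3) sketch isn't part of the original article; it shows one way to set the memory (hard limit) and memoryReservation (soft limit) parameters described above when registering a task definition. The family name, container name, image, and limit values are placeholder assumptions.

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # assumed Region

# Register a task definition that sets a soft limit (memoryReservation) and a
# hard limit (memory) for the container, in MiB.
response = ecs.register_task_definition(
    family="example-task",
    requiresCompatibilities=["EC2"],
    containerDefinitions=[
        {
            "name": "example-container",
            "image": "public.ecr.aws/docker/library/nginx:latest",
            "memoryReservation": 256,  # soft limit
            "memory": 512,             # hard limit; exceeding it stops the container
            "essential": True,
        }
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])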
https://repost.aws/knowledge-center/ecs-resolve-outofmemory-errors
Why is my query planning time so high in Amazon Redshift?
My query planning time in Amazon Redshift is much longer than the actual execution time. Why is this happening?
"My query planning time in Amazon Redshift is much longer than the actual execution time. Why is this happening?Short DescriptionIf there are queries with exclusive locks on a production load, the lock wait time can increase. This causes your query planning time in Amazon Redshift to be much longer than the actual execution time. Check the Workload Execution Breakdown metric to see if there is a sudden increase in query planning time. This increase in time is likely caused by a transaction that's waiting for a lock.ResolutionTo detect a transaction that's waiting for a lock, perform the following steps:1.    Open a new session for your first lock:begin; lock table1;2.    Open a second session that runs in parallel:select * from table1 limit 1000;The query in this second session submits an AccessSharedLock request. However, the query must wait for the AccessExclusiveLock, because the first session has already claimed it. The ExclusiveLock then blocks all other operations on table1.3.    Check your Workload Execution Breakdown metrics. A sudden spike in query planning time confirms that there is a transaction waiting for a lock.4.    (Optional) If a transaction waiting for a lock exists, then release the lock by manually terminating the session:select pg_terminate_backend(PID);For more information about releasing locks, see How do I detect and release locks in Amazon Redshift?Related InformationAnalyzing Workload PerformanceQuery Planning and Execution WorkflowFollow"
https://repost.aws/knowledge-center/redshift-query-planning-time
How do I enable DKIM for Amazon SES?
I want to enable DomainKeys Identified Mail (DKIM) for the messages that I send using Amazon Simple Email Service (Amazon SES). How can I do that?
"I want to enable DomainKeys Identified Mail (DKIM) for the messages that I send using Amazon Simple Email Service (Amazon SES). How can I do that?Short descriptionDKIM is a method that allows receiving mail servers to validate the authenticity of the received email. Using DKIM, senders digitally sign email using a private key. The receiving mail servers then validate email by matching the digital signature with the public key that's published in the sender domain's DNS records.Important: Before you enable DKIM, you must complete the verification process for an Amazon SES identity.ResolutionUse Amazon SES Easy DKIM, or Bring Your Own DKIM (BYODKIM) to sign your Amazon SES email with a 1024-bit DKIM key.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Set up Easy DKIMEasy DKIM allows you to configure DKIM authentication for email sent using a certain Amazon SES verified identity (domain or email address). For instructions, see Setting up Easy DKIM for a domain or Setting up Easy DKIM for an email address.After Amazon SES verifies your DNS records, the DKIM Verification Status shown in the Amazon SES console changes to verified. To troubleshoot a failed verification status, see Why is my DKIM domain failing to verify on Amazon SES?Provide your own public-private key pair (BYODKIM)You can use your own DKIM authentication token for email sent using an Amazon SES verified domain. To configure BYODKIM, first install and configure the AWS CLI. Then, using Amazon SES API v2, proceed with the steps to configure an Amazon SES verified domain with BYODKIM.After you complete the steps to set up BYODKIM, it can take up to 72 hours for the DKIM status to change to SUCCESS.If the DKIM status is FAILED, then review your public-private key pair and the TXT record. Check for the following:Look for any errors in the updated key.Confirm that there aren't any line breaks in the key.Confirm that your domain isn't listed twice.Confirm that the key is 1024 bits.Note: If you need a key that's larger than 1024 bits, then consider setting up manual DKIM signing in Amazon SES.After you correct any errors or confirm that there are no errors, retry the BYODKIM configuration process.Manually add a DKIM signatureYou can also manually add DKIM signatures to your messages, and then use Amazon SES to send the messages. For more information, see Manual DKIM signing in Amazon SES.Note: When you sign your messages, it's a best practice to use a bit length of at least 1024 bits.Follow"
https://repost.aws/knowledge-center/ses-enable-dkim
How do I troubleshoot my DOWN Direct Connect connection in the AWS console?
I want to troubleshoot my AWS Direct Connect when it goes down in the AWS console.
"I want to troubleshoot my AWS Direct Connect when it goes down in the AWS console.ResolutionTo troubleshoot your down Direct Connect connection for an existing or new connection, complete the following steps:Existing connectionIf your existing connection has gone down, then complete the following steps:1.    If your connection was previously UP, then check your Personal Health Dashboard for notifications of planned maintenance or unplanned outages.2.    Reach out to your partner to check if they have planned or unplanned network outages.3.    Check your connection light levels. Levels should be within the range of -14.4 and 2.50 decibel-milliwatts for 1G and 10G Direct Connect connections.4.    If you still have issues, then contact AWS Support and provide the optical signal report from the colocation provider.New connectionIf you're attempting to establish a new connection and you're having issues, then complete the following steps:1.    Confirm with the colocation provider that the cross connect is complete. Obtain a cross connect completion notice from your colocation or network provider and compare the ports with those listed on your LOA-CFA.2.    Check that your router or your provider's router is powered on and that the ports are activated.3.    Make sure that the routers are using the correct optical transceiver fiber type:Single-mode fiber with a 1000BASE-LX (1310 nm) transceiver for 1 Gigabit.Single-mode fiber with a 10GBASE-LR (1310 nm) transceiver for 10 Gigabit.Single-mode fiber with a 100GBASE-LR4 transceiver for 100 Gigabit ethernet.4.    Make sure auto-negotiation for ports with speeds faster than 1 Gbps is turned off.Note: Depending on the Direct Connect endpoint serving your connection, auto-negotiation can be turned on or turned off for 1 Gpbs connections. If it needs to be turned off for your connections, then port speeds and full-duplex mode must be manually configured.5.    Check that the router is receiving an acceptable optical signal over the cross connect.6.    Roll the Tx/Rx fiber strands if needed.7.    Check the Amazon CloudWatch metrics. Check the Direct Connect device's Tx/Rx optical readings (10 Gbps port speeds only), physical error count, and operational status.8.    Reach out to the colocation provider and request a written report for the Tx/Rx optical signal across the cross connect.9.    Request that the service provider/co-location partner perform the following loop tests at the Meet-Me-Room (MMR):Do a loopback at the (MMR) towards the customer router. If the port on the on-premises device comes UP, then the link up to MMR is fine. You can also check the Tx/Rx light levels on your device. To check if the light levels are in range, use the show interfaces eth1 transceiver command or your device specific command.Do a loopback at the MMR towards AWS router and leave it on for at least 10 minutes. If Layer-1 (cable and transceiver) is good, then the port should come UP on the AWS side. Confirm this by using the ConnectionState CloudWatch metric.10.    If you still have issues, then contact AWS Support and provide the cross connect completion notice and the optical signal report from the colocation provider.    Related informationTroubleshooting layer 1 (physical) issuesFollow"
https://repost.aws/knowledge-center/direct-connect-down-connection
How can I troubleshoot slow loading times when I use a web browser to download an object stored in Amazon S3?
"I'm trying to download an object from Amazon Simple Storage Service (Amazon S3) using a web browser, but the download is slow."
"I'm trying to download an object from Amazon Simple Storage Service (Amazon S3) using a web browser, but the download is slow.ResolutionTo identify the cause of slow download times from Amazon S3 in a web browser, check the following potential issues.Low internet bandwidthVerify the network speed that you get from your internet service provider (ISP). If the speed is lower, then it might cause a bottleneck when you try to connect to the S3 bucket and download objects.Large object sizeIf some S3 objects take longer to download than other objects, then check the size of the objects that take a longer time to download. For very large Amazon S3 objects, you might notice slow download times when your web browser tries to download the entire object. Instead, try downloading large objects with a ranged GET request using the Amazon S3 API. Because a ranged GET request lets you download a large object in separate, smaller chunks, it can help you avoid latency.Geographical distance between the clients and the Amazon S3 bucketIf you have clients from different parts of the world that download from your S3 bucket, then those clients' locations might impact download speed. Clients that are geographically distant from the AWS Region of your bucket might experience slower download times. To improve download times for geographically distant clients, you can take the following actions:Serve your S3 objects from an Amazon CloudFront distribution. CloudFront can serve your clients from an edge location that's geographically closer to them, and therefore minimize latency.Move your bucket to a Region that's geographically closer to your clients. You can use cross-Region replication to copy objects from the source bucket into the destination bucket in another Region.Intermediate network-related issuesNetwork-related issues such as packet loss, high number of hops, or any other ISP-related issue can affect Amazon S3 download times.To determine if a network-related issue contributes to the slow downloads, use tools such as mtr and traceroute. These tools can help identify possible network issues when sending packets to a remote host. For example, the following traceroute command sends a TCP traceroute to the Amazon S3 endpoint in us-east-1 over port 80:sudo traceroute -P TCP -p 80 s3.us-east-1.amazonaws.comNote: Because many network devices don't respond over ICMP, it's a best practice to run a TCP traceroute.Workstation resourcesConfirm that there's no resource contention within your workstation (for example, CPU, memory, or network bandwidth) that might contribute to the overall latency.Depending on your operating system, you can use tools such as Resource Monitor (from the Microsoft website) or the top command to check the resource usage on most client systems.Isolate processing time from Amazon S3To help identify what's contributing to the slow download times, isolate the processing time from Amazon S3. Activate server access logging, and then review logs for Total Time. This shows how long Amazon S3 takes to process the request.You can also analyze the Amazon CloudWatch metric FirstByteLatency. FirstByteLatency shows how long it takes for Amazon S3 to process the request from the client and then send the response to the client. This CloudWatch metric provides a bucket-level perspective of performance.Note: Amazon S3 CloudWatch request metrics are billed at the same rate as custom metrics.Follow"
https://repost.aws/knowledge-center/s3-download-slow-loading-web-browser
How do I troubleshoot InstanceLimitExceeded errors when starting or launching an EC2 instance?
I can't start or launch my Amazon Elastic Compute Cloud (Amazon EC2) instance. I receive the following error: "InstanceLimitExceeded: Your quota allows for 0 more running instance(s)."
"I’m can't start or launch my Amazon Elastic Compute Cloud (Amazon EC2) instance. I receive the following error: "InstanceLimitExceeded: Your quota allows for 0 more running instance(s).”ResolutionThe InstanceLimitExceeded error indicates that you reached the limit on the number of running On-Demand Instances that you can launch in an AWS Region.You can request an instance limit increase on a per-Region basis. For more information, see Amazon EC2 service quotas.Related informationOn-Demand Instance limitsHow do I request an EC2 vCPU limit increase for my On-Demand Instance?Why can't I start or launch my EC2 instance?Follow"
https://repost.aws/knowledge-center/ec2-InstanceLimitExceeded-error
Why is the SysMemoryUtilization so high on my Amazon OpenSearch Service cluster?
I noticed that the SysMemoryUtilization on my Amazon OpenSearch Service cluster is above 90%. Why is my system memory utilization so high?
"I noticed that the SysMemoryUtilization on my Amazon OpenSearch Service cluster is above 90%. Why is my system memory utilization usage so high?ResolutionSystem memory utilization that is above 90% doesn't indicate any heap usage issues or an overloaded OpenSearch Service cluster. A system memory utilization above 90% is considered normal, especially for nodes running OpenSearch Service. Therefore, you don't need to scale up the size of your cluster.Most of the memory used by OpenSearch Service is for in-memory data structures. OpenSearch Service uses off-heap buffers for efficient and fast access to files. The Java virtual machine (JVM) also requires some memory.To determine whether your cluster needs to be scaled up, use an Amazon CloudWatch alarm on these metrics:JVM memory pressureCPU utilizationFree storage spaceFor more information about setting CloudWatch alarms, see Recommended CloudWatch alarms.Follow"
https://repost.aws/knowledge-center/opensearch-high-sysmemoryutilization
How can I calculate the maximum IOPS and throughput for an Amazon EBS volume?
"I have an Amazon Elastic Block Store (Amazon EBS) volume, and I want to calculate the maximum available IOPS and throughput for my volume."
"I have an Amazon Elastic Block Store (Amazon EBS) volume, and I want to calculate the maximum available IOPS and throughput for my volume.Short descriptionAmazon EBS has a number of different volume types, and each type has different characteristics. An EBS volume can experience a high latency if it's hitting its IOPS or throughput limit. To troubleshoot issues on an EBS volume, you must be able to calculate the maximum available IOPS and throughput limit for your EBS volume.The IOPS and throughput available for an EBS volume can vary for a number of reasons, such as volume type, volume size, provisioned IOPS, and so on. You can use a simple interactive bash script to find the maximum available IOPS and throughput for your EBS volume. Then, you can analyze the performance of your EBS volume based on these limits.ResolutionTo run the volume_Limit_calculator script, do the following:1.    Download the script to your Linux instance, and make the script executable:# chmod +x volume_Limit_calculator.sh2.    Run the script:# ./volume_Limit_calculator.shNote: The script above doesn't calculate IOPS and throughput for io2 block express volumes.Related informationAmazon EBS volume typesHow do I optimize the performance of my Amazon EBS volumes?Follow"
https://repost.aws/knowledge-center/ebs-maximum-iops-throughput
How can I defend against DDoS attacks with Shield Standard?
I want to protect my application from Distributed Denial of Service (DDoS) attacks with AWS Shield Standard.
"I want to protect my application from Distributed Denial of Service (DDoS) attacks with AWS Shield Standard.Short descriptionAWS Shield Standard is a managed threat protection service that protects the perimeter of your application. Shield Standard provides automatic threat protection at no additional charge. You can use Shield Standard to protect your application at the edge of the AWS network using Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53. These AWS services receive protection against all known network and transport layer attacks. To defend against layer 7 DDoS attacks, you can use AWS WAF.To protect your application from DDoS attacks with Shield Standard, it's a best practice to follow these guidelines for your application architecture:Reduce the attack area surfaceBe ready to scale and absorb the attackSafeguard exposed resourcesMonitor application behaviorCreate a plan for attacksResolutionReduce the attack area surfaceTo make sure that only expected traffic reaches your application, use network access control lists (network ACLs) and security groups.Use the AWS managed prefix list for CloudFront. You can limit the inbound HTTP or HTTPS traffic to your origins from only the IP addresses that belong to CloudFront origin-facing servers.Deploy the backend resources hosting your application inside private subnets.To reduce the likelihood of malicious traffic reaching your application directly, avoid allocating Elastic IP addresses to your backend resources.For more information, see Attack surface reduction.Be ready to scale and absorb the DDoS attackProtect your application at the edge of the AWS network using CloudFront, Global Accelerator, and Route 53.Absorb and distribute excess traffic with Elastic Load Balancing.Scale horizontally on-demand with AWS Auto Scaling.Scale vertically by using the optimal Amazon Elastic Compute Cloud (Amazon EC2) instance types for your application.Activate enhanced networking on your Amazon EC2 instances.Activate API caching to enhance responsiveness.Optimize caching on CloudFront.Use CloudFront Origin Shield to further reduce requests for caching content to the origin.For more information, see Mitigation techniques.Safeguard exposed resourcesConfigure AWS WAF with a rate-based rule in block mode to defend against request flood attacks.Note: You must have CloudFront, Amazon API Gateway, Application Load Balancer, or AWS AppSync configured to use AWS WAF.Use CloudFront geographic restrictions to prevent users originating from countries that you don't want to access your content.Use burst limits for each method with your Amazon API Gateway REST APIs to protect your API endpoint from being overwhelmed by requests .Use origin access identity (OAI) with your Amazon Simple Storage Service (Amazon S3) buckets.Set up the API key as the X-API-Key header of each incoming request to protect your Amazon API Gateway against direct access.Monitor application behaviorCreate Amazon CloudWatch dashboards to establish a baseline of your application's key metrics such as traffic patterns and resource use.Enhance the visibility of your CloudWatch logs with the Centralized Logging solution.Configure CloudWatch alarms to automatically scale the application in response to a DDoS attack.Create Route 53 health checks to monitor the health of your application and manage traffic failover for your application in response to a DDoS attack.For more information, see AWS Application Auto Scaling monitoring.Create a plan for DDoS attacksDevelop a runbook in 
advance so that you can respond to DDoS attacks in an efficient and timely manner. For guidance on creating a runbook see the AWS security incident response guide. You can also review this example runbook.Use the aws-lambda-shield-engagement script to quickly log a ticket to AWS Support during an impacting DDoS attack.Shield Standard offers protection against infrastructure-based DDoS attacks occurring at layers 3 and 4 of the OSI model. To defend against layer 7 DDoS attacks, you can use AWS WAF.For more information on how to protect your application from DDoS attacks, see AWS best practices for DDoS resiliency.Related informationHow to help protect dynamic web applications against DDoS attacks by using CloudFront and Route 53How to protect your web application against DDoS attacks by using Route 53 and an external content delivery networkHow to protect a self-managed DNS service against DDoS attacks using AWS Global Accelerator and AWS Shield AdvancedTesting and tuning your AWS WAF protectionsHow can I simulate a DDoS attack to test Shield Advanced?Follow"
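A hedged AWS CLI sketch of the attack surface reduction guidance above follows: it limits origin ingress to the CloudFront origin-facing managed prefix list. The security group ID and prefix list ID shown are hypothetical placeholders to replace with your own values.

# Look up the ID of the CloudFront origin-facing managed prefix list
aws ec2 describe-managed-prefix-lists \
  --filters Name=prefix-list-name,Values=com.amazonaws.global.cloudfront.origin-facing \
  --query "PrefixLists[].PrefixListId"

# Allow inbound HTTPS to the origin security group only from CloudFront (placeholder IDs)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --ip-permissions 'IpProtocol=tcp,FromPort=443,ToPort=443,PrefixListIds=[{PrefixListId=pl-0123456789abcdef0}]'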
https://repost.aws/knowledge-center/shield-standard-ddos-attack
How can I fix an Amazon RDS DB instance that is stuck in the incompatible-parameters status?
My Amazon Relational Database Service (Amazon RDS) instance is stuck in an incompatible-parameters state. I can't connect to the DB instance or modify it. All I can do is delete it or reboot. How can I fix this?
"My Amazon Relational Database Service (Amazon RDS) instance is stuck in an incompatible-parameters state. I can't connect to the DB instance or modify it. All I can do is delete it or reboot. How can I fix this?Short descriptionAn Amazon RDS DB instance in the incompatible-parameters state means that least one of the parameters in the associated group is set with a value that's not compatible with the current engine version or DB instance class.This can be caused by:A DB instance that's scaled to use an instance type that has less memory available than the previous one. At least one of the memory settings in the associated parameter group exceeds the memory size available for the current DB instance.A database engine that's upgraded to a different version. The engine is no longer compatible with one or more parameter settings of the current custom parameter group.Configurations can fail if you attempt to associate a different parameter group, scale the DB instance type, change the engine version, or modify the DB instance configuration. To accept a new configuration, DB instances must be in the available state. If the DB instance is in an incompatible-parameters state, then you can only reboot or delete it.For information about how to determine which values are incompatible, see How do I identify which Amazon RDS DB parameters are in custom parameter groups and which are in default parameter groups?ResolutionAmazon RDS doesn't directly identify and provide the incompatible parameter in the parameter group attached to Amazon RDS that causes the incompatible-parameter state. This state is a Terminal state that requires you to fix the incompatible parameters. To resolve this issue, change the value of each incompatible parameter to a compatible value using one of the following options:Reset all the parameters in the parameter group to the default value.Reset the values of the parameters that are incompatible.Note: All the DB instances associated with the incompatible parameter group are affected by these value changes. To back up the current parameter group settings, copy the parameter group before resetting the parameters.To identify the root cause of the issue, copy the incompatible parameter group and then compare the differences between custom parameter values and default values. For example, max_connections is a system-default value. If you compare a custom parameter group that has a custom value set for max_connections parameter to a default parameter group, then you see the default value and custom value for this parameter to compare the difference.Note: When you compare custom parameter group with a default parameter group, you see only the default values of the system-default parameters under the Default Parameter group. The default values of the engine-default parameters aren't displayed, because engine-default parameter values are specific to the engine version and configuration settings of your RDS.You can use AWS CloudTrail to check changes that have occurred to your custom parameter group. 
Filter the Event name for ModifyDBParameterGroup or ModifyDBClusterParameterGroup within the last 90 days.To create a copy of the parameter group using the Amazon RDS consoleOpen the Amazon RDS console, and then choose Parameter groups from the navigation pane.Select the incompatible parameter group, and then choose Parameter group actions.Choose Copy.To reset all the parameters in the parameter group to default values using the Amazon RDS consoleOpen the Amazon RDS console, and then choose Parameter groups from the navigation pane.Choose the parameter group that you want to reset.Choose Parameter group actions, and then choose Reset.Choose Reset.To reset parameter values using the Amazon RDS consoleTo avoid resetting all the parameter values of the incompatible parameter group, you can choose which parameters to change. You can do this by editing the incompatible parameter group from the Amazon RDS console.Open the Amazon RDS console, and then choose Parameter groups from the navigation pane.Select the incompatible parameter groups (or to reset all parameters, select all the parameters).Choose Parameter group actions, and then choose Edit.Enter the valid parameter values, and then choose Save Changes.Reboot the DB instance without failover to apply new settings.Note: The Amazon RDS console allows you to change parameters to any related allowed values. The AWS Command Line Interface (AWS CLI) allows you to reset target parameters to their default values. Changes to parameter values using the AWS CLI to a value other than the default parameter value have no effect.For more information about Oracle parameters that are incompatible with Amazon RDS, see Administering your Oracle DB instance and Using HugePages for an Oracle DB instance.Related informationViewing Amazon RDS DB instance statusHow do I resolve issues with an Amazon RDS database that is in an incompatible-network state?Follow"
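As a rough illustration of the back-up-then-reset workflow described above, the following AWS CLI sketch copies the parameter group, resets it, and reboots the instance. The parameter group and DB instance names are placeholders, and, as noted above, the AWS CLI resets parameters to their default values.

# Back up the current settings by copying the parameter group
aws rds copy-db-parameter-group \
  --source-db-parameter-group-identifier my-incompatible-group \
  --target-db-parameter-group-identifier my-incompatible-group-backup \
  --target-db-parameter-group-description "Copy taken before resetting parameters"

# Reset every parameter in the group to its default value
aws rds reset-db-parameter-group \
  --db-parameter-group-name my-incompatible-group \
  --reset-all-parameters

# Or reset only specific parameters, for example max_connections
aws rds reset-db-parameter-group \
  --db-parameter-group-name my-incompatible-group \
  --parameters "ParameterName=max_connections,ApplyMethod=pending-reboot"

# Reboot the DB instance so that the new values take effect
aws rds reboot-db-instance --db-instance-identifier my-db-instance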
https://repost.aws/knowledge-center/rds-incompatible-parameters
How do I troubleshoot errors when I import data into my Amazon SageMaker Studio using SageMaker Data Wrangler?
I'm getting errors when I try to import data from Amazon Simple Storage Service (Amazon S3) or Amazon Athena using Amazon SageMaker Data Wrangler.
"I'm getting errors when I try to import data from Amazon Simple Storage Service (Amazon S3) or Amazon Athena using Amazon SageMaker Data Wrangler.ResolutionLifecycle permission errorWhen you try to import data from Amazon Athena into Data Wrangler, you might get the following error:S3LifecyclePermissionError: You don't have permission to read expiration rules from the bucket that you specified.This error occurs because the SageMaker execution role associated with the user profile doesn't have the required permissions to access the Amazon S3 Lifecycle configurations for managing data retention and expiration.To resolve this error, add the following AWS Identity and Access Management (IAM) policy to the SageMaker execution role (Example: AmazonSageMaker-ExecutionRole-xxxxxxxxxxxxxxx):{ "Version": "2012-10-17", "Statement": [ { "Sid": "LifecycleConfig", "Effect": "Allow", "Action": [ "s3:GetLifecycleConfiguration", "s3:PutLifecycleConfiguration" ], "Resource": "*" } ]}For Resource, you can include only those Region-specific buckets that must be accessed. GetBucketLifecycleConfiguration returns the lifecycle configuration information set on the bucket, while PutBucketLifecycleConfiguration creates a new Lifecycle configuration for the bucket.Access denied errorYou might get the following error when you run a processing job with unencrypted output settings.com.amazonaws.services.s3.model.AmazonS3Exception: Access DeniedYou might get this error because of the following reasons:The SageMaker execution role doesn't have the required permissions to perform S3 operations.Either the S3 bucket policy or Amazon Virtual Private Cloud (Amazon VPC) endpoint policy has explicitly denied permissions for PutObject. This might be the case if you enforced only encrypted connections to the S3 bucket by providing a specific AWS Key Management Service (AWS KMS) key.To resolve this error, do the following:Check if the SageMaker execution role has minimum permissions for S3 bucket operations:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:PutObject", "s3:ListBucket", "s3:CreateBucket", ], "Resource": [ "arn:aws:s3:::sagemaker-us-east-1-1111222233334444", "arn:aws:s3:::sagemaker-us-east-1-1111222233334444/*" ] } ]}Be sure that S3 bucket policy or VPC endpoint policy doesn't explicitly deny the required permissions for S3 operations.Consider passing the AWS KMS key to the processing job that allows to decrypt objects in the S3 bucket from where the data is imported.Consider using a different S3 bucket for importing your data that's encrypted at rest using the Amazon S3 server-side encryption.Follow"
https://repost.aws/knowledge-center/sagemaker-studio-import-data-wrangler
How do I troubleshoot outbound caller ID issues in Amazon Connect?
"In Amazon Connect, my caller ID isn't displaying as expected when making outbound calls."
"In Amazon Connect, my caller ID isn't displaying as expected when making outbound calls.Short descriptionYou can specify either a caller ID name or a caller ID number in your Amazon Connect instance. You can set a caller ID name or number in queue or by using the Call phone number block. For more information, see How to specify a custom caller ID number using a Call phone number block.Important: If you're using the Call phone number block, then you can set the custom caller ID number. However, to use this in your Amazon Connect instance you must first activate the custom caller ID in the block. To activate a custom caller ID, you must create a support case. If the instance isn't activated, then you can't set the custom caller ID in the block.ResolutionOutbound caller ID as a nameIt's a best practice to register your outbound caller ID name and phone number in a CNAM database for US and Canada numbers. To register your US-based phone number and company name in the CNAM database of the Amazon Connect carrier, you must create a support case.Note: Amazon Web Services (AWS) offers the functionality of configuring a display name for outbound calls. However, AWS can't guarantee that the same caller ID is shown to the recipient of the call.Outbound caller ID as a numberTo troubleshoot outbound caller IDs as a number, consider the following:Make sure that the queue and contact flow where the outbound caller ID is set is correctly associated with the agent. If caller ID is set in the queue, then verify that the agent is assigned to the routing profile that's set for the caller ID.If you're setting the caller ID in queue settings and the Call phone number block, then the call phone number block takes priority. If you didn't define the call phone number block, then the caller ID in queue settings are considered. For more information, see Set up outbound caller ID.Note: The caller ID display number is offered on a best effort basis and isn't guaranteed. However, when setting the caller ID in the queue, make sure to select the DID numbers that are configured in the Amazon Connect instance. If a TFN number is selected, then the caller ID might be flagged as a SPAM call by the telecom regulators. For more information, see Set up outbound caller ID.Follow"
https://repost.aws/knowledge-center/connect-outbound-caller-ID-issues
How do I troubleshoot issues with my Elastic IP address on my EC2 instances?
"I'm receiving errors when allocating or releasing an Elastic IP address associated with my Amazon Elastic Compute Cloud (Amazon EC2) instance. Or, I need to restore an Elastic IP address I accidentally deleted. How can I troubleshoot common issues with my Elastic IP address?"
"I'm receiving errors when allocating or releasing an Elastic IP address associated with my Amazon Elastic Compute Cloud (Amazon EC2) instance. Or, I need to restore an Elastic IP address I accidentally deleted. How can I troubleshoot common issues with my Elastic IP address?Short descriptionThe following are common issues that might occur with an Elastic IP address in your AWS account:I want to restore an accidentally deleted Elastic IP address.An associated Elastic IP address isn't released, even after terminating the EC2 instance.I'm being charged for an Elastic IP address even though it isn't associated with any of my resources.When allocating a new Elastic IP address, I'm getting the error: "Elastic IP address could not be allocated. The maximum number of addresses has been reached."When associating an Elastic IP address to one of my EC2 instances, I'm getting the error: "Elastic IP address could not be associated. You are not authorized to perform this operation."When releasing an Elastic IP address from my account, I'm getting the error: "Elastic IP address could not be released. You do not have permission to access the specified resource."ResolutionI want to restore an accidently deleted Elastic IP addressIf you released your Elastic IP address, you might be able to recover it. For more information, see Recover an Elastic IP address.An associated Elastic IP address isn't released, even after terminating the EC2 instanceTo release an Elastic IP address, you must first disassociate it from any resources. For more information, see Disassociate an Elastic IP address.After you disassociate the Elastic IP address, you can re-associate it with a different resource. You incur charges for any Elastic IP address that's allocated for use with a VPC but not associated with an instance. If you don't need the Elastic IP address, you can release it. For more information, see Release an Elastic IP address.I'm being charged for an Elastic IP address even when it is not associated to any of my resources.If you receive bills for your Elastic IP address that aren't associated with a resource, see Why am I being billed for Elastic IP addresses when all my Amazon EC2 instances are terminated?When allocating a new Elastic IP address, I'm receiving the error "Elastic IP address could not be allocated. The maximum number of addresses has been reached"All AWS accounts are limited to five Elastic IP addresses per Region. If you receive the error The maximum number of addresses has been reached, verify how many Elastic IP address you're using and what the limit is for your account.If you need additional Elastic IP addresses, request a quota increase. When creating the quota increase request, search for EC2-VPC Elastic IPs on the AWS Services tab.When associating an Elastic IP address to one of my EC2 instances, I'm getting the error: "Elastic IP address could not be associated. You are not authorized to perform this operation"The AllocateAddress API call is used to allocate an Elastic IP address to your AWS account. 
The AssociateAddress API call is used to associate an Elastic IP address to any of your resources.Make sure that the AWS Identity and Access Management (IAM) user or role using the command has the following required permission in the attached IAM policy:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:AllocateAddress", "ec2:AssociateAddress" ], "Resource": "*" } ]}When releasing an Elastic IP address from my account, I'm getting the error: "Elastic IP address could not be released. You do not have permission to access the specified resource."This error message occurs when you try to release or disassociate an Elastic IP address that's used by an AWS Managed Service. Examples of AWS Managed Services are Elastic Load Balancing (ELB), NAT Gateway, Amazon Elastic File System (Amazon EFS), and so on. To release an Elastic IP address associated with an AWS Managed Service, delete the resource that's using it. For example, if you have a NAT Gateway with an attached Elastic IP address, then you must first delete the NAT Gateway before you can release the Elastic IP address.Related informationElastic IP addressesFollow"
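To check how many Elastic IP addresses you hold and whether any are unassociated (and therefore billable), you can use an AWS CLI sketch such as the following. The Service Quotas quota code shown for EC2-VPC Elastic IPs is an assumption to verify in your account before relying on it.

# List Elastic IP addresses that aren't associated with any resource
aws ec2 describe-addresses \
  --query "Addresses[?AssociationId==null].[PublicIp,AllocationId]" --output table

# Check the current Elastic IP quota (quota code assumed to be L-0263D0A3)
aws service-quotas get-service-quota --service-code ec2 --quota-code L-0263D0A3

# Request a quota increase if you need more addresses in the Region
aws service-quotas request-service-quota-increase \
  --service-code ec2 --quota-code L-0263D0A3 --desired-value 10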
https://repost.aws/knowledge-center/ec2-troubleshoot-elastic-ip-addresses
How can I configure the password policy for my RDS for SQL Server instance?
I need to configure a password policy for my Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server instance. How do I do this?
"I need to configure a password policy for my Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server instance. How do I do this?Short descriptionPassword policy and expiration intervals are configured at the host level (OS, Microsoft Windows layer). Amazon RDS is a managed service. So, modifications to the password policy aren't possible due to restricted access to the operating system.The password policy is turned on by default in RDS for SQL Server when a new login or modified using SQL Server Management Studio (SSMS) or T-SQL.ResolutionThe RDS for SQL Server instance isn't joined to an Active DirectoryIf the instance isn't joined to an Active Directory, then the policies are defined on the Windows operating system. You can't modify these policies. The following are the values configured on the Windows password policy:Enforce password history: 0 passwords rememberedMinimum password length: 0 charactersPassword must meet complexity requirements: DisabledStore passwords using reversible encryption: DisabledMinimum password age: 0 daysMaximum password age: 42 daysNote: An account lock out policy isn't possible for RDS for SQL Server instances that aren't joined to an Active Directory. An account lockout policy requires access to the underlying OS. Amazon RDS is a managed service, so host-level access isn't available.The RDS for SQL Server instance is joined to an Active DirectoryYou can enforce and modify password policies if you're using Windows Authentication for RDS for SQL Server. This applies to Windows Authentication only. SQL Authentication uses the local password policy defined in the previous section whether or not it's joined to a domain.Run the following query to identify the SQL Server logins configured with the password policy and the password expiration on the instance:select name, type_desc, create_date, modify_date, is_policy_checked, is_expiration_checked, isnull(loginproperty(name,'DaysUntilExpiration'),'-') Days_to_Expire, is_disabledfrom sys.sql_loginsThe following are the available options for password policy enforcement and password expiration for SQL Server logins:Note: These options are only for RDS for SQL Server instances that are joined to a domain.The policy_checked column is 0: The SQL Server login does not do any password policy enforcement.The policy_checked column is 1 and is_expiration_checked is 0: The SQL Serve login enforces password complexity and lockout, but not password expiration.The policy_checked column and is_expiration_checked are both 1: The SQL Server login enforces password complexity, lockout, and password expiration.If policy_checked and is_expiration_checked are both 0, then the policy is for the primary user in your RDS for SQL Server DB instance. This indicates that password complexity, lockout setting, and password expiration aren't set for the primary user. So, the primary user doesn't expire in RDS for SQL Server. If your primary user loses access, you can reset the password. For more information, see Why did the master user for my RDS for SQL Server instance lose access and how can I gain it back?Follow"
https://repost.aws/knowledge-center/rds-sql-server-configure-password-policy
How do I configure my Amazon RDS for Oracle instance to send emails?
I want to configure my Amazon Relational Database Service (Amazon RDS) for Oracle DB instance to send emails.
"I want to configure my Amazon Relational Database Service (Amazon RDS) for Oracle DB instance to send emails.Short descriptionTo send an email from an Amazon RDS for Oracle instance, you can use UTL_MAIL or UTL_SMTP packages.To use UTL_MAIL with RDS for Oracle, you must add UTL_MAIL option in the non-default option group attached with the instance. For more information on configuring UTL_MAIL, see ORACLE UTL_MAIL.To use UTL_SMTP with RDS for Oracle, you must configure an SMTP server on an on-premises machine or an Amazon Elastic Compute Cloud (Amazon EC2) instance using Amazon Simple Email Service (Amazon SES). In this case, be sure that the connectivity from RDS for Oracle to the SMTP server is configured correctly.This article focuses on configuring the DB instance to send emails through the UTL_SMTP package using Amazon SES.As a prerequisite, be sure that the Amazon SES endpoint is accessible from the RDS instance. If your RDS instance runs in a private subnet, then you must add a NAT gateway in the route table of the subnet. This is required for the subnet to communicate with the Amazon SES endpoint. To check the route table of the subnet, open the Amazon VPC console, and choose Route Tables in the navigation pane.To configure your DB instance to send emails, do the following:Set up the SMTP mail server. In this article, Amazon SES is used for setting up the SMTP mail server.Create an Amazon EC2 instance. Then, configure the Oracle client and wallet using the appropriate certificate.Upload the wallet to an Amazon Simple Storage Service (Amazon S3) bucket.Download the wallet from the Amazon S3 bucket to the RDS server using S3 integration.Grant the required privileges to the user (if the user is a non-master user), and create the required access control lists (ACLs).Send the email using the Amazon SES credentials and the procedure provided in this article.ResolutionSet up the SMTP mail server using Amazon SESFor instructions, see How do I set up and connect to SMTP using Amazon SES?Create an Amazon EC2 instance and configure the Oracle client and wallet1.    Create an Amazon EC2 Linux instance.2.    Install the Oracle Client that preferably has the same version as that of the Amazon RDS instance. In this article, Oracle version 19c is used. You can download the Oracle 19c client, see Oracle Database 19c (19.3). This version also comes with the orapki utility.3.    Install AWS Command Line Interface (AWS CLI).4.    Allow connection on the database port in the RDS security group from the EC2 instance. If both instances use the same VPC, then allow the connection via their private IP addresses.5.    Connect to the EC2 instance.6.    Run the following command to download the AmazonRootCA1 certificate.wget https://www.amazontrust.com/repository/AmazonRootCA1.pem7.    Run the following commands to create the wallet:orapki wallet create -wallet . -auto_login_onlyorapki wallet add -wallet . -trusted_cert -cert AmazonRootCA1.pem -auto_login_onlyUpload the wallet to Amazon S31.    Run the following command to upload the wallet to an Amazon S3 bucket:Note: Be sure that the S3 bucket is in same Region as the RDS instance for S3 integration to work.aws s3 cp cwallet.sso s3://testbucket/2.    Run the following command to verify if the file is uploaded successfully:aws s3 ls testbucketDownload the wallet to the RDS server using S3 integration1.    Create an option group using the Amazon RDS console.2.    Add the S3_INTEGRATION option in the option group that you created. 
This is required to download the wallet file from Amazon S3 to the RDS instance.3.    Create an RDS for Oracle instance with the option group that you created.4.    Prepare for the S3 integration by creating an AWS Identity and Access Management (IAM) policy and role. For more information, see Prerequisites for Amazon RDS for Oracle integration with Amazon S3.5.    Run the following commands to download the wallet into RDS from the S3 bucket:SQL> exec rdsadmin.rdsadmin_util.create_directory('S3_WALLET');PL/SQL procedure successfully completed.SQL> SELECT OWNER,DIRECTORY_NAME,DIRECTORY_PATH FROM DBA_DIRECTORIES WHERE DIRECTORY_NAME='S3_WALLET';OWNER DIRECTORY_NAME DIRECTORY_PATH-------------------- ------------------------------ ----------------------------------------------------------------------SYS S3_WALLET /rdsdbdata/userdirs/01SQL> SELECTrdsadmin.rdsadmin_s3_tasks.download_from_s3(p_bucket_name => 'testbucket',p_directory_name => 'S3_WALLET',P_S3_PREFIX => 'cwallet.sso') AS TASK_ID FROM DUAL;TASK_ID--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------1625291989577-52SQL> SELECT filename FROM table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('S3_WALLET'));FILENAME--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------01/cwallet.ssoGrant the required privileges to the user and create the required ACLsNote: You need this step if you're using the non-master user for RDS for Oracle.Run the following command to grant the required privileges to the non-master user:begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'DBA_DIRECTORIES', p_grantee => 'example-username', p_privilege => 'SELECT');end;/Run the following commands to create the required ACLs:BEGINDBMS_NETWORK_ACL_ADMIN.CREATE_ACL (acl => 'ses_1.xml',description => 'AWS SES ACL 1',principal => 'TEST',is_grant => TRUE,privilege => 'connect');COMMIT;END;/BEGINDBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (acl => 'ses_1.xml',host => 'example-host');COMMIT;END;/Send the emailRun the following procedure to send the email:Note: Be sure to replace the following values in the procedure:example-server with the name of your SMTP mail serverexample-sender-email with the sender email addressexample-receiver-email with the receiver email addressexample-SMTP-username with your user nameexample-SMTP-password with your passwordIf you're using on-premises or Amazon EC2 as the SMTP server, then be sure to use the information related to the on-premises or EC2 server instead of Amazon SES.declarel_smtp_server varchar2(1024) := 'example-server';l_smtp_port number := 587;l_wallet_dir varchar2(128) := 'S3_WALLET';l_from varchar2(128) := 'example-sender-email';l_to varchar2(128) := 'example-receiver-email';l_user varchar2(128) := 'example-SMTP-username';l_password varchar2(128) := 'example-SMTP-password';l_subject varchar2(128) := 'Test mail from RDS Oracle';l_wallet_path varchar2(4000);l_conn utl_smtp.connection;l_reply utl_smtp.reply;l_replies utl_smtp.replies;beginselect 'file:/' || directory_path into l_wallet_path from dba_directories where directory_name=l_wallet_dir;--open a connectionl_reply := utl_smtp.open_connection(host => l_smtp_server,port => l_smtp_port,c => l_conn,wallet_path => l_wallet_path,secure_connection_before_smtp => false);dbms_output.put_line('opened connection, 
received reply ' || l_reply.code || '/' || l_reply.text);--get supported configs from serverl_replies := utl_smtp.ehlo(l_conn, 'localhost');for r in 1..l_replies.count loopdbms_output.put_line('ehlo (server config) : ' || l_replies(r).code || '/' || l_replies(r).text);end loop;--STARTTLSl_reply := utl_smtp.starttls(l_conn);dbms_output.put_line('starttls, received reply ' || l_reply.code || '/' || l_reply.text);--l_replies := utl_smtp.ehlo(l_conn, 'localhost');for r in 1..l_replies.count loopdbms_output.put_line('ehlo (server config) : ' || l_replies(r).code || '/' || l_replies(r).text);end loop;utl_smtp.auth(l_conn, l_user, l_password, utl_smtp.all_schemes);utl_smtp.mail(l_conn, l_from);utl_smtp.rcpt(l_conn, l_to);utl_smtp.open_data (l_conn);utl_smtp.write_data(l_conn, 'Date: ' || to_char(SYSDATE, 'DD-MON-YYYY HH24:MI:SS') || utl_tcp.crlf);utl_smtp.write_data(l_conn, 'From: ' || l_from || utl_tcp.crlf);utl_smtp.write_data(l_conn, 'To: ' || l_to || utl_tcp.crlf);utl_smtp.write_data(l_conn, 'Subject: ' || l_subject || utl_tcp.crlf);utl_smtp.write_data(l_conn, '' || utl_tcp.crlf);utl_smtp.write_data(l_conn, 'Test message.' || utl_tcp.crlf);utl_smtp.close_data(l_conn);l_reply := utl_smtp.quit(l_conn);exceptionwhen others thenutl_smtp.quit(l_conn);raise;end;/Troubleshooting errorsORA-29279: If your SMTP user name or password is inaccurate, then you might get the following errorORA-29279: SMTP permanent error: 535 Authentication Credentials InvalidTo resolve this issue, verify that your SMTP credentials are accurate.ORA-00942: If the email package is run by a non-master user, then you might get the following error:PL/SQL: ORA-00942: table or view does not existTo resolve this issue, grant the required permissions to the user by running the following procedure:begin rdsadmin.rdsadmin_util.grant_sys_object( p_obj_name => 'DBA_DIRECTORIES', p_grantee => 'example-username', p_privilege => 'SELECT');end;/ORA-24247: If either an ACL isn't assigned to the target host or the user doesn't have the required privileges to access the target host, then you might get the following error:ORA-24247: network access denied by access control list (ACL)To resolve this issue, create an ACL and assign the ACL to the host by running the following procedure:BEGINDBMS_NETWORK_ACL_ADMIN.CREATE_ACL (acl => 'ses_1.xml',description => 'AWS SES ACL 1',principal => 'TEST',is_grant => TRUE,privilege => 'connect');COMMIT;END;/BEGINDBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (acl => 'ses_1.xml',host => 'example-host');COMMIT;END;/Related informationOracle documentation for Overview of the email delivery serviceFollow"
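The S3 integration pieces of the setup above can also be scripted. The following AWS CLI sketch adds the S3_INTEGRATION option and associates the IAM role with the instance; the option group name, instance identifier, role ARN, and the OptionVersion value are assumptions to adapt to your environment.

# Add the S3_INTEGRATION option to the custom option group
aws rds add-option-to-option-group \
  --option-group-name my-oracle-option-group \
  --options OptionName=S3_INTEGRATION,OptionVersion=1.0 \
  --apply-immediately

# Associate the IAM role that grants access to the S3 bucket with the DB instance
aws rds add-role-to-db-instance \
  --db-instance-identifier my-oracle-instance \
  --feature-name S3_INTEGRATION \
  --role-arn arn:aws:iam::111122223333:role/my-rds-s3-integration-role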
https://repost.aws/knowledge-center/rds-oracle-send-emails
How do I resolve change set errors in CloudFormation?
I receive an error when I try to import resources into an AWS CloudFormation stack.
"I receive an error when I try to import resources into an AWS CloudFormation stack.Short descriptionBased on the type of error that you receive, complete the steps in the related section of this article.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.ResolutionTroubleshoot the outputs error"There was an error creating this change set. As part of the import operation, you cannot modify or add [Outputs]"This error occurs when importing a resource into a stack. It also occurs when creating a new stack with a resource import template that has outputs through the CloudFormation console. Try these troubleshooting steps:1.    Compare the Outputs section of the latest CloudFormation template with the template that your stack is currently using. The Outputs sections in both templates must be the same. If the values aren't the same, then update the latest template to match the values and outputs in the Outputs section of the current template.Important: The import operation can't contain additions and modifications to Logical ID, Description, Value, Export, and other properties in Outputs.2.    After the import operation completes, update the stack with the desired changes in the Outputs configuration.Troubleshoot the validation error with stack attributes"An error occurred (ValidationError) when calling the CreateChangeSet operation: As part of the import operation, you cannot modify or add [Tags]"This error occurs when you use the AWS CLI or AWS SDK to create an IMPORT type change set that contains modified or added stack attributes.Try these troubleshooting steps:1.    Confirm that the stack attributes that are included for the change set creation operation are in sync with the current attribute values of the stack.Important: Don't update or add any new attribute values.2.    After the resources are imported, update your attributes in a separate update operation.Troubleshoot the modified resource error"There was an error creating this change set. You have modified resources [ResourceName] in your template that are not being imported. Update, create or delete operations cannot be executed during import operations."This error occurs when you modify an existing resource during a resource import operation. During an import operation, you can't create, update, and delete a resource.Try these troubleshooting steps: 1.    Create an UPDATE type change set instead of an IMPORT type change set. This shows you the source of the change in the resource.2.    
Use the same Resources specification for the existing resources, and add only the appropriate resources to import to the template.Troubleshoot the resources to import list errorThe following errors commonly occur when you use the AWS CLI or AWS SDK to create an IMPORT type change set."An error occurred (ValidationError) when calling the CreateChangeSet operation: Resources [<ResourceName>] is missing from ResourceToImport list"If you receive the preceding error, then try the following troubleshooting step:In your CloudFormation template, verify that you're passing a physical ID into the ResourceToImport property for all resources that you want to import to the stack."An error occurred (ValidationError) when calling the CreateChangeSet operation: Must Provide at least one resource to import"If you receive the preceding error, then try the following troubleshooting step:Verify that you're including --resources-to-import in your AWS CLI command or ResourceToImport in your API call. Also, be sure to list all the resources to import.Important: You must pass a Physical ID to all new resources for importing.Related informationCreating a stack from existing resourcesImporting existing resources into a stackResources that support import and drift detection operationsBringing existing resources into CloudFormation managementFollow"
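For reference, a hedged sketch of a well-formed IMPORT change set in the AWS CLI follows. It assumes an existing S3 bucket named my-existing-bucket being imported under the logical ID MyBucket; the stack name, change set name, and file names are placeholders, and the template must already declare MyBucket (with a DeletionPolicy).

# Describe each resource to import: type, logical ID, and physical identifier
cat > resources-to-import.json <<'EOF'
[
  {
    "ResourceType": "AWS::S3::Bucket",
    "LogicalResourceId": "MyBucket",
    "ResourceIdentifier": { "BucketName": "my-existing-bucket" }
  }
]
EOF

# Create the IMPORT change set, then run it
aws cloudformation create-change-set \
  --stack-name my-stack \
  --change-set-name import-existing-bucket \
  --change-set-type IMPORT \
  --resources-to-import file://resources-to-import.json \
  --template-body file://template.yaml

aws cloudformation execute-change-set \
  --stack-name my-stack --change-set-name import-existing-bucket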
https://repost.aws/knowledge-center/cloudformation-change-set-errors
Why am I seeing an Emergent Snapshot or my snapshot running after my backup window is closed for my RDS for SQL Server instance?
Why am I seeing an Emergent Snapshot or my snapshot running after my backup window is closed for my Amazon Relational Database Service (RDS) for SQL Server instance?
"Why am I seeing an Emergent Snapshot or my snapshot running after my backup window is closed for my Amazon Relational Database Service (RDS) for SQL Server instance?Short descriptionAn Emergent Snapshot is an automatic as-needed backup taken by Amazon RDS due to the following:Restoring or creating a new database with SIMPLE recovery model.Modifying recovery model from FULL to SIMPLE/bulk-logged in both single and Multi-Availability Zone (AZ) instances.For Point in Time Recovery (PiTR), RDS uploads transaction log backups every five minutes for DB instances to Amazon Simple Storage Service (Amazon S3). When RDS doesn't take transactional log backups successfully, an Emergent Snapshot is triggered by RDS to mitigate problems during PiTR.After instance patching is complete, RDS triggers an Emergent Snapshot to safeguard the instance.You can back up your Amazon RDS instances using one of the following methods:Manually back up your DB instance by creating a DB snapshot. For more information, see creating a DB snapshot.Automatically back up your DB instance by making sure automated backups are turned on. Amazon RDS creates and saves automated backups during the backup window of your DB instance.When manually or automatically backing up your DB instance, an event "Backing up DB instance" is logged in RDS Events. Automated backups occur daily during the preferred backup window. Also, observing an event "Emergent Snapshot Request: Databases found to still be awaiting snapshot" in RDS events creates an automatic ad-hoc backup. This automatic ad-hoc backup occurs outside the instance backup window.Note: An Emergent Snapshot is normal and is an expected behavior.ResolutionTo identify the reason for Emergent Snapshot, review the SQL Server engine logs:Open the Amazon RDS console.In the navigation pane, choose Databases.Choose the name of the DB instance that has the log file that you want to view.Choose the Logs & events tab.Scroll down to the Logs section.(Optional) Enter a search term to filter your results.Choose the log that you want to view, then choose View.Review the Amazon RDS for SQL Server logs, which are logged immediately before the Emergent Snapshot, to identify messages similar to the following:BACKUP failed to complete the command BACKUP LOG Test_Database. Check the backup application log for detailed messages.Setting database option RECOVERY to SIMPLE for database 'Test_Database'Restore is complete on database 'Test_Database'. The database is now available.Starting up database 'Test_Database'.The Amazon RDS for SQL server logs indicate log backup failures and changes of a database recovery model to SIMPLE. They also indicate new databases restored on an instance or new databases created.To identify instances that have been patched, review RDS Events to look for an event similar to "Applying off-line patches to DB instance".Follow"
https://repost.aws/knowledge-center/rds-sql-server-emergent-snapshot-backup
How do I allow requests from a bot blocked by AWS WAF Bot Control managed rule group?
I want to allow requests from a bot that has been blocked by AWS WAF Bot Control rule group. How do I allow requests from a legitimate bot?
"I want to allow requests from a bot that has been blocked by AWS WAF Bot Control rule group. How do I allow requests from a legitimate bot?Short descriptionTo allow requests from a bot blocked by AWS WAF Bot Control rule group, do the following:Identify the Bot Control rule that's blocking the requests from AWS WAF logs by Querying AWS WAF logs.Set the Bot Control rule that's blocking the requests to count.Create a custom rule to match against the excluded rule's label and to block all matching requests except for the bot that you want to allow.Validate that the bot traffic is allowed.The Bot Control managed rule group verifies bots using the IP addresses from AWS WAF. If you have verified bots that route through a proxy or a CDN that doesn't preserve the client IP address while forwarding the requests, then you must specifically allow the bot.ResolutionIdentify the Bot Control rule that's blocking the requestsAnalyze the AWS WAF logs to identify the Bot Control rule that's blocking requests from the required bot.1.    To analyze AWS WAF logs using Amazon Athena, create a table for AWS WAF logs in Athena using partition projection. For instructions, see Creating the table for AWS WAF logs in Athena using partition projection.2.    Run the following Athena query to find the details of the request blocked by the Bot Control rule group:Note: Replacewaf_logs with your table name. The time intervaltime > now() - interval '3' day can be replaced with your specified time interval.WITH waf_data AS (SELECT from_unixtime(waf.timestamp / 1000) as time, waf.terminatingRuleId, waf.action, waf.httprequest.clientip as clientip, waf.httprequest.requestid as requestid, waf.httprequest.country as country, rulegroup.terminatingrule.ruleid as matchedRule,labels as Labels, map_agg(LOWER(f.name), f.value) AS kv FROM waf_logs waf, UNNEST(waf.httprequest.headers)AS t(f), UNNEST(waf.rulegrouplist) AS t(rulegroup) WHERE rulegroup.terminatingrule.ruleid IS NOT NULL GROUP BY 1, 2, 3, 4, 5, 6, 7,8)SELECT waf_data.time, waf_data.action, waf_data.terminatingRuleId, waf_data.matchedRule, waf_data.kv['user-agent'] as UserAgent,waf_data.clientip, waf_data.country, waf_data.LabelsFROM waf_dataWhere terminatingRuleId='AWS-AWSManagedRulesBotControlRuleSet' and time > now() - interval '3' dayORDER BY timeDESCFor sample Amazon Athena queries to filter records for a specified time range, see Example queries for AWS WAF logs.3.    (Optional) To further narrow down your search, add an additional filter on UserAgent using the AND operator in the Where clause. For a description of the fields in WAF logs, see Log Fields. For example, you can add the filter kv['user-agent'] like 'Postman%' to narrow your results.4.    Check the matchedRule column to identify the rule which is blocking the requests. Note: For additional information on Bot Control rules, see AWS WAF Bot Control rule group.Set the Bot Control rule that's blocking the requests to countEdit the Bot Control Rule group to set the rule that's blocking the requests to count. To set a rule to count, see Setting rule actions to count in a rule group. This allows the rule to apply its label to matching requests and to allow the bot that isn't blocked.Create a custom rule to match against the excluded rule's label and to block all matching requests except for the bot that you want to allowAdd a label matching rule to your web ACL based on the rule label that is blocking the request. The label matching rule must come after the Bot Control managed rule group. 
For information on Bot Control managed rule group labels, see AWS WAF Bot Control rule group.If a rule with the category label is blocking the requestConfigure your custom rule to allow a specific blocked bot. Important: Replace the bot category and bot name labels in the rule configuration with the bot category and bot name labels from the Athena query results.For all other rule labelsCreate a custom rule to Create an exception for a blocked user agent.Important: Replace the bot signal label and the UserAgent value in the field SearchString in the rule configuration with the bot signal label and UserAgent value from the labels and UserAgent columns of Athena query results.Validate that the bot traffic is allowedCheck the AWS WAF logs again to verify that the bot is now being allowed. If the bot is still blocked, repeat the preceding process to identify additional rules that are blocking the requests.Related informationFalse positives with AWS WAF Bot ControlAWS WAF Bot Control examplesFollow"
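Before building the label-match rule, it can help to list the rules and labels that the Bot Control managed rule group publishes; the following AWS CLI sketch assumes a regional web ACL (for CloudFront distributions, use --scope CLOUDFRONT with the us-east-1 Region).

# List the labels that the Bot Control rule group can add to requests
aws wafv2 describe-managed-rule-group \
  --vendor-name AWS \
  --name AWSManagedRulesBotControlRuleSet \
  --scope REGIONAL \
  --region us-east-1 \
  --query "AvailableLabels[].Name"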
https://repost.aws/knowledge-center/waf-allow-blocked-bot-rule-group
How can I mount an Amazon EFS volume to an instance in my Elastic Beanstalk environment?
I want to mount an Amazon Elastic File System (Amazon EFS) volume to an Amazon Elastic Compute Cloud (Amazon EC2) instance in my AWS Elastic Beanstalk environment.
"I want to mount an Amazon Elastic File System (Amazon EFS) volume to an Amazon Elastic Compute Cloud (Amazon EC2) instance in my AWS Elastic Beanstalk environment.Short descriptionIn an Elastic Beanstalk environment, you can use Amazon EFS to create a shared directory that stores files that are uploaded or modified by your application's users. Your application can treat a mounted Amazon EFS volume as local storage. Therefore, you don't have to change your application code to scale up to multiple instances.To mount an Amazon EFS volume to an Amazon EC2 instance in your Elastic Beanstalk environment, you must include configuration files in your source code.Resolution1.    Create an Amazon EFS file system, and then note the Amazon EFS ID and security group ID.2.    To allow connections, edit the security group rules for the file system. The rules must allow inbound connections on port 2049 (Network File System or NFS) from the security group for instances in your Elastic Beanstalk environment.3.    Update the instance security group to allow outbound connections on port 2049 to the Amazon EFS security group.Note: The Amazon EFS security group must allow inbound connections when you mount the mount targets of one subnet on an environment in a different subnet. That is, it must allow inbound connections on port 2049 from your Amazon Virtual Private Cloud (Amazon VPC) CIDR.4.    In the root of your application bundle, create a directory named .ebextensions.5.    Add a formatted configuration file (YAML or JSON) to your directory.Important: Add the file system ID in the configuration file. Replace FILE_SYSTEM_ID: {"Ref" : "FileSystem"} with FILE_SYSTEM_ID: fs-xxxxxxxx. The configuration file includes a script that mounts the Amazon EFS file system to the instance during deployment.6.    Deploy the source code that includes the configuration file from step 5 to your Elastic Beanstalk application.7.    To confirm that your Amazon EFS volume is mounted to your instance on your specified mount path, run the following command:df -HRelated informationMounting EFS file systemsUsing Elastic Beanstalk with Amazon Elastic File SystemSecurity in Amazon EFSFollow"
https://repost.aws/knowledge-center/elastic-beanstalk-mount-efs-volumes
How do I set up the AWS CLI so that I can work with an Amazon DynamoDB table on Amazon EC2?
I want to configure the AWS Command Line Interface (AWS CLI) to work with Amazon DynamoDB tables on Amazon Elastic Compute Cloud (Amazon EC2).
"I want to configure the AWS Command Line Interface (AWS CLI) to work with Amazon DynamoDB tables on Amazon Elastic Compute Cloud (Amazon EC2).ResolutionCreate an AWS Identity and Access Management (IAM) roleTo create an IAM role, do the following:For Select type of trusted entity, choose AWS service, and then choose EC2.For Attach permissions policies, choose AmazonDynamoDBFullAccess.Note: Follow the security best practice of granting least privilege to perform a task.Attach the IAM role to an Amazon EC2 instance1.    Launch an EC2 instance using an Amazon Linux Amazon Machine Image (AMI). Linux AMIs come with the AWS CLI installed.2.    On the Configure Instance Details page, in the IAM role drop-down list, select the IAM role that you created earlier. Be sure that the subnet that you select is accessible from the internet.3.    On the Configure Security Group page, be sure that you select a security group that allows SSH access from your IP address.Connect to the instance using SSH1.    Connect to your Linux instance using SSH.2.    After you're connected, run the yum update command to be sure that the software packages on the instance are up to date.Configure the AWS CLI1.    Run the aws configure command.2.    When prompted for an AWS Access Key ID and AWS Secret Access Key, press Enter. You don't need to provide keys because you're using an instance IAM role to connect with an AWS service.3.    When prompted for Default region name, enter the Region where your DynamoDB tables are located. For example, ap-northeast-3. For a list of Region names, see Service endpoints.4.    When prompted for Default output format, press Enter.5.    Run the list-tables command to confirm that you can run DynamoDB commands on the AWS CLI.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Related informationUsing the AWS CLI with DynamoDBFollow"
https://repost.aws/knowledge-center/prepare-environment-aws-cli-dynamodb-ec2
How do I troubleshoot Lambda function SQS ReportBatchItemFailures?
"I set up Partial Batch Response for my AWS Lambda function that has Amazon Simple Queue Service (Amazon SQS) configured as an event source. Now, my Lambda function is returning a list of "ReportBatchItemFailures" and one of the following occurs: Lambda retries an entire message batch when there wasn't a function error, or Lambda doesn't retry any of the partial message batches. How do I troubleshoot the issue?"
"I set up Partial Batch Response for my AWS Lambda function that has Amazon Simple Queue Service (Amazon SQS) configured as an event source. Now, my Lambda function is returning a list of "ReportBatchItemFailures" and one of the following occurs: Lambda retries an entire message batch when there wasn't a function error, or Lambda doesn't retry any of the partial message batches. How do I troubleshoot the issue?ResolutionNote: You must manually configure Partial Batch Response on your Lambda function to programmatically process partial Amazon SQS batches. For more information, see Reporting batch item failures in the AWS Lambda Developer Guide.Example AWS Command Line Interface (AWS CLI) command to activate Partial Batch Response for a Lambda functionImportant: Replace <esm_UUID> with your Amazon SQS event source mapping's universally unique identifier (UUID). To retrieve your event source mapping's UUID, run the list-event-source-mappings AWS CLI command. If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.aws lambda update-event-source-mapping --uuid <esm_UUID> --function-response-types "ReportBatchItemFailures"To troubleshoot ReportBatchItemFailures where Lambda retries an entire SQS message batch when there wasn't a function errorReview the Partial Batch Response in your Lambda function's code to see if there are any of the following responses. Then, remediate the issue based on the response that's recorded.Note: Lambda treats a batch as a complete failure if your function returns any of the following responses.Responses for an EventResponse that isn't valid JSONreturn "Hello world"return ""Response for an empty itemIdentifier valuereturn {"batchItemFailures":[{"itemIdentifier": ""}]}Response for a null itemIdentifier valuereturn {"batchItemFailures":[{"itemIdentifier": None}]}Response for an itemIdentifier value with an incorrect key namereturn {"batchItemFailures":[{"bad_key": messageID}]}Response for an itemIdentifier value with a message ID that doesn't existreturn {"batchItemFailures":[{"itemIdentifier": "random_ID"}]}Important: Your Lambda function must return a valid itemIdentifier JSON value.To troubleshoot ReportBatchItemFailures where Lambda doesn't retry any of the partial message batchesReview the Partial Batch Response in your Lambda function's code to see if there are any of the following responses. Then, remediate the issue based on the response that's recorded.Note: Lambda treats a batch as a complete success when your function returns any of the following responses.Response for an empty batchItemFailures listreturn {"batchItemFailures":[]}Response for a null batchItemFailures listreturn {"batchItemFailures": None}Responses for an EventResponse with an empty or unexpected JSON valuereturn {}return {"Key1":"Value1"}Responses for a null EventResponsereturnreturn NoneImportant: Your Lambda function must return a response that contains a "batchItemFailures" JSON value that includes a list of valid message IDs.Related informationHow can I prevent an Amazon SQS message from invoking my Lambda function more than once?Follow"
https://repost.aws/knowledge-center/lambda-sqs-report-batch-item-failures
How do I change the VPC for an Amazon RDS DB instance?
How can I move my Amazon Relational Database Service (Amazon RDS) DB instance from an existing Amazon Virtual Private Cloud (Amazon VPC) to a new VPC?
"How can I move my Amazon Relational Database Service (Amazon RDS) DB instance from an existing Amazon Virtual Private Cloud (Amazon VPC) to a new VPC?Short descriptionTo move an Amazon RDS DB instance to a new VPC, you must change its subnet group. Before you move the RDS DB instance to a new network, configure the new VPC. This configuration includes the security group inbound rules, the subnet group, and the route tables. When you change the VPC for a DB instance, the instance reboots when it moves from one network to another. Because the DB instance isn't accessible while it's being moved, change the VPC during a planned change window that is outside the RDS weekly maintenance window.You can't change the VPC for a DB instance if:The DB instance is in multiple Availability Zones (AZs). Convert the DB instance to a single AZ, and then convert it back to a Multi-AZ DB instance after moving to the new VPC. For more information about converting instances, see High availability (Multi-AZ) for Amazon RDS. Note: You can't change a DB subnet group to a Multi-AZ configuration. By default, the Amazon Aurora storage is Multi-AZ—even for a single instance—so you can't modify the VPC for Amazon Aurora. For more information, see How can I change the VPC of an Amazon Aurora for MySQL or PostgreSQL cluster?The DB instance is a read replica or has read replicas. Remove the read replicas, and then add read replicas after the DB instance is moved to the new VPC.The subnet group created in the target VPC doesn't have subnets from the AZ where the source database is running. If the AZs are different, then the operation fails.ResolutionOpen the Amazon RDS console.From the navigation pane, choose Subnet Groups from the navigation pane.Choose Create DB Subnet Group.Enter the subnet name, description, and VPC ID, and then choose the subnets needed for the DB instance.Choose Create.From the navigation pane, choose Databases.Select the DB instance, and then choose Modify.From the Connectivity section, select the Subnet Group associated with the new VPC. Then, choose the appropriate Security Group for that VPC.Choose Continue, and then choose Apply Immediately.Note: If you don't choose Apply Immediately, then Amazon RDS modifies the VPC during the next maintenance window.Review the details on the Modify DB Instance page, and then choose Modify DB Instance.This task can take several minutes to complete. You can confirm that the subnet is changed by selecting the instance and then navigating to the configuration details page. This shows that the subnet group is updated and the status is Complete. You can also open the RDS console and then choose Events in the left navigation pane. Confirm that the process moved the DB instance to the target VPC.Related informationWorking with a DB instance in a VPCVPCs and subnetsFollow"
https://repost.aws/knowledge-center/change-vpc-rds-db-instance
How can I turn off TLS 1.0 or TLS 1.1 in my Lightsail instance?
How can I turn off TLS 1.0 or TLS 1.1 in my Amazon Lightsail instance?
"How can I turn off TLS 1.0 or TLS 1.1 in my Amazon Lightsail instance?Short descriptionAll versions of the SSL/TLS protocol prior to TLS 1.2 are no longer updated and considered insecure. Most web servers still have these TLS versions turned on by default. You can turn these protocols off by modifying the SSLProtocol directive in the web server configuration files. The following resolution covers turning off these non-updated TLS versions in Lightsail instances for Apache and NGINX web servers.Note: If you're using Amazon Lightsail load balancer for your website, then you must also turn off TLS version 1.0 and 1.1 in the load balancer. However, turning off TLS versions in Lightsail load balancer isn't currently supported. To turn off these TLS versions and also use the Lightsail load balancer, use an Amazon Application Load Balancer instead of a Lightsail load balancer.ResolutionNote: The file paths mentioned in this article might change depending on the following:The instance has a Bitnami stack and the Bitnami stack uses native Linux system packages (Approach A).The instance has a Bitnami stack and it's a self-contained installation (Approach B).If you're using a Lightsail instance with a Bitnami stack, run the following command to identify your Bitnami installation type:test ! -f "/opt/bitnami/common/bin/openssl" && echo "Approach A: Using system packages." || echo "Approach B: Self-contained installation."Lightsail instances with a Bitnami stackApache web service1.    Open the configuration file:Bitnami stack under Approach Asudo vi /opt/bitnami/apache2/conf/bitnami/bitnami-ssl.confBitnami stack under Approach Bsudo vi /opt/bitnami/apache2/conf/bitnami/bitnami.conf2.    In the configuration file, modify the SSLProtocol directive to reflect the TLS version that you want to use. In the following example, the TLS version is 1.2 and 1.3:SSLProtocol +TLSv1.2 +TLSv1.3Note: Use TLSv1.3 only if you have OpenSSL version 1.1.1 in your server. You can verify the version by running the command openssl version.3.    Save the file by pressing esc, typing :wq! and then pressing ENTER.4.    Restart the Apache service:sudo /opt/bitnami/ctlscript.sh restart apacheNGINX web service1.    Open the configuration file:sudo vi /opt/bitnami/nginx/conf/nginx.conf2.    In the configuration file, modify the SSLProtocol directive to reflect the TLS version that you want to use. In the following example, the TLS version is 1.2 and 1.3:ssl_protocols TLSv1.2 TLSv1.3;Note: Use TLSv1.3 only if you have OpenSSL version 1.1.1 in your server. You can verify the version by running the command openssl version.3.    Save the file by pressing esc, typing :wq! and then pressing ENTER.4.    Restart the Apache service:sudo /opt/bitnami/ctlscript.sh restart nginxLightsail instances without a Bitnami stackApache web service1.    Open the configuration file:For Linux distributions such as Amazon Linux 2 and CentOSsudo vi /etc/httpd/conf.d/ssl.confFor Linux distributions such as Ubuntu and Debiansudo vi /etc/apache2/mods-enabled/ssl.conf2.    In the configuration file, modify the SSLProtocol directive to reflect the TLS version that you want to use. In the following example, the TLS version is 1.2 and 1.3.SSLProtocol +TLSv1.2 +TLSv1.3Note: Use TLSv1.3 only if you have OpenSSL version 1.1.1 in your server. You can verify the version by running the command openssl version.3.    Save the file by pressing esc, typing :wq! and then pressing ENTER.4.    
Restart the Apache service:For Linux distributions such as Amazon Linux 2 and CentOSsudo systemctl restart httpdFor Linux distributions such as Ubuntu and Debiansudo systemctl restart apache2NGINX web service1.    Open the configuration file:sudo vi /etc/nginx/nginx.conf2.    In the configuration file, modify the ssl_protocols directive to reflect the TLS versions that you want to use. In the following example, the TLS versions are 1.2 and 1.3.ssl_protocols TLSv1.2 TLSv1.3;Note: Use TLSv1.3 only if you have OpenSSL version 1.1.1 or later on your server. You can verify the version by running the command openssl version.3.    Save the file by pressing esc, typing :wq! and then pressing ENTER.4.    Restart the NGINX service:sudo systemctl restart nginxFollow"
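As an additional check that isn't part of the original article, you can confirm from another machine that the older protocol versions are now rejected. The following shell sketch assumes a placeholder domain (example.com) and a local OpenSSL client that still accepts the -tls1_1 option; adjust the configuration-test command to your distribution's Apache or NGINX layout.
# Validate the new configuration before restarting
sudo apachectl configtest   # Apache
sudo nginx -t               # NGINX
# A TLS 1.1 handshake should now fail with a protocol or handshake error
openssl s_client -connect example.com:443 -tls1_1 < /dev/null
# A TLS 1.2 handshake should still complete and print the negotiated protocol
openssl s_client -connect example.com:443 -tls1_2 < /dev/null
If the TLS 1.1 test still succeeds, confirm that you edited the configuration file that the running web server actually loads and that the service restart completed without errors.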
https://repost.aws/knowledge-center/lightsail-turn-off-tls
How can I troubleshoot storage consumption in my RDS for SQL Server DB instance?
"My Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server DB instance is using more space than expected. Why is this happening, and how can I optimize disk storage?"
"My Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server DB instance is using more space than expected. Why is this happening, and how can I optimize disk storage?Short descriptionYou can monitor available storage space for a DB instance using the FreeStorageSpace metric in Amazon CloudWatch. Frequently monitoring this metric and turning on storage auto-scaling helps prevent instances from running out of storage (Storage Full state).However, the FreeStorageSpace metric doesn't describe how the SQL Server engine is consuming available storage.ResolutionAmazon RDS for SQL Server instances in the Storage Full stateYou can't perform basic operations when your RDS instance is stuck in the Storage Full state. For more information, see How do I resolve problems that occur when Amazon RDS DB instances run out of storage?Some RDS for SQL Server DB instances have limitations for modifying storage. In the Amazon RDS console, the Allocated storage option is deactivated if your DB instance isn’t eligible to be modified. To scale storage on an instance when the modify option isn't available, migrate your data using native backup and restore to a new instance. Make sure that the new instance has Provisioned IOPS or has the General Purpose (SSD) storage type. Or, use a data migration tool to migrate to the new instance. For more information, see Modifying an Amazon RDS DB instance.Use the following AWS Command Line Interface (AWS CLI) command to return the valid storage options for your DB instance:describe-valid-db-instance-modificationsNote: Scale storage and storage autoscaling aren't supported in RDS for SQL Server instances that use magnetic storage.For instances that have storage autoscaling turned on, storage is extended only in certain scenarios. For more information, see Managing capacity automatically with Amazon RDS storage autoscaling. In addition, storage is extended only if the maximum storage threshold doesn't equal or exceed the storage increment. For more information, see Limitations.Storage consumption for RDS for SQL Server instancesTo gather detailed information about the physical disk space usage for a SQL Server DB instance, run a query similar to the following:SELECT D.name AS [database_name] , F.name AS [file_name] , F.type_desc AS [file_type] , CONVERT(decimal(10,2), F.size * 0.0078125) AS [size_on_disk_mb] , CONVERT(decimal(10,2), F.max_size * 0.0078125) AS [max_size_mb]FROM sys.master_files AS FINNER JOIN sys.databases AS D ON F.database_id = D.database_id;Files containing ROWS comprise data, and files containing LOGS represent in-flight transactions.Note: The sys.master_files system view shows the startup size of tempdb. It doesn't reflect the current size of the tempdb. Run the following query to check the current size of tempdb:select name AS [database_name], physical_name AS [file_name], convert(decimal(10,2),size*0.0078125) AS [size_on_disk_mb]from tempdb.sys.database_files;Before you optimize storage, be sure that you understand how the SQL Server engine uses storage. SQL Server engine storage is broadly defined using the following categories:Database filesYou can break down the total storage used by an individual database into row, index, and free space in the currently active database. 
To do this, run a query similar to the following:EXEC sp_spaceused;Transaction log filesTo determine the amount of storage used by transaction logs, run the following query:DBCC SQLPERF(LOGSPACE)You can expect free space in the transaction logs, but you can de-allocate excessive free space by following the Microsoft documentation for DBCC SHRINKFILE.You can reduce the excessive allocation of free space for transaction logs by using the ALTER DATABASE (transact-SQL) file and filegroup options. The options configure the auto-growth settings for the database.Temporary database (tempdb)The SQL Server tempdb grows automatically. If the tempdb is consuming a large amount of available storage, you can shrink the tempdb database.Note: If you shrink a tempdb database, check the Message tab in SQL Server Management Studio (SSMS) for error messages after running the command. If you receive a DBCC SHRINKFILE: Page could not be moved because it is a work table page error message, then see the Microsoft documentation for DBCC FREESYSTEMCACHE and DBCC FREEPROCCACHE. You can also reboot the DB instance to clear the tempdb.DB instances in a Storage Full state might not be able to reboot. If this occurs, increase the allocated storage for your DB instance and then reboot. For more information, see How do I resolve problems that occur when Amazon RDS DB instances run out of storage?Database indexesIf you're dedicating a significant portion of your available storage to indexes, then you might be able to conserve some space through index tuning. You can gather detailed information about index usage by running the sys.dm_db_index_usage_stats dynamic management view. This can help you evaluate tuning priorities.Trace filesTrace files, including C2 Audit Trace files and dump files, can consume a lot of disk space. Amazon RDS automatically deletes trace and dump files older than 7 days, but you can also adjust the retention settings for your trace files. For more information, see Setting the retention period for trace and dump files.Space consumed by Amazon S3 integrationIf you integrated your RDS DB instance with Amazon S3, you might have uploaded files to your D: drive that are taking up space. To check how much space is being consumed by your S3 integration, run a command to list the files on your DB instance. For more information, see Listing files on the RDS DB instance.CDCFor databases that have CDC turned on, log file size increases depending on the frequency of changes to the source tables or databases. Storage might eventually run out. If the log disk becomes full, then CDC can't process further transactions.AuditingIf auditing isn't configured correctly for an instance, the logs might grow exponentially and affect storage. For more information, see Using SQL Server Audit.C2 audit mode saves a large amount of event information to the log file. The log file might grow quickly and put the instance into the Storage Full state. For more information, see C2 audit mode server configuration option in the Microsoft documentation.In addition, turning on features such as query store might also impact resource utilization.Related informationAmazon RDS for Microsoft SQL ServerMonitoring metrics in an Amazon RDS instanceAmazon RDS DB instance running out of storageMigrating Microsoft SQL Server databases to the AWS CloudFollow"
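As a quick way to correlate what the SQL Server engine reports with what Amazon RDS sees, and one that isn't spelled out in the article, you can pull the FreeStorageSpace metric and the valid storage modifications from the AWS CLI. The DB instance identifier below is a placeholder, and the date expressions assume GNU date (as on Amazon Linux).
# Minimum free storage over the last hour, in bytes
aws cloudwatch get-metric-statistics --namespace AWS/RDS --metric-name FreeStorageSpace --dimensions Name=DBInstanceIdentifier,Value=my-sqlserver-instance --statistics Minimum --unit Bytes --period 300 --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
# Storage sizes and types that the instance can currently be modified to
aws rds describe-valid-db-instance-modifications --db-instance-identifier my-sqlserver-instance
If the second command returns no valid storage options, the instance likely falls under the storage modification limitations described above.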
https://repost.aws/knowledge-center/rds-sql-server-storage-optimization
How do I troubleshoot the connection between Transit Gateway and third-party virtual appliances running in a VPC?
"I have an AWS Transit Gateway Connect attachment to establish connectivity between Transit Gateway and SD-WAN (Software-defined Wide Area Network) instances in my virtual private cloud (VPC). However, I am unable to connect my remote network from the VPC over the Transit Gateway Connect attachment. How can I troubleshoot this?"
"I have an AWS Transit Gateway Connect attachment to establish connectivity between Transit Gateway and SD-WAN (Software-defined Wide Area Network) instances in my virtual private cloud (VPC). However, I am unable to connect my remote network from the VPC over the Transit Gateway Connect attachment. How can I troubleshoot this?Short descriptionTo troubleshoot connectivity between the source and remote networks connected by a Transit Gateway Connect attachment, check the following:Connect attachment setupAvailability ZonesRoute tablesNetwork security settingsResolutionTroubleshoot Transit Gateway and Connect attachment setupConfirm the Transit Gateway and Connect attachment setup configurationOpen the Amazon Virtual Private Cloud (Amazon VPC) console.From the navigation pane, choose Transit gateway attachments.Select the source VPC attachment where you have resources that need to communicate with remote or on-premises hosts. Verify that this attachment is associated with the correct Transit Gateway ID.Repeat step 3 for the Connect attachment, which is the attachment used to establish the connection between the transit gateway and the third-party virtual appliance running in your VPC.Repeat step 3 for the transport VPC attachment, which is the attachment used as the transport mechanism to establish the Generic Routing Encapsulation (GRE) setup between your transit gateway and SD-WAN.From the navigation pane, choose Transit gateway Route Tables.Select the Transit Gateway Route Table for each time of attachment and confirm that:The source and SD-WAN VPCs are attached to a transit gateway. This can be same or different transit gateway or Region.The source and SD-WAN VPC attachments are associated with the correct transit gateway route table.The Connect attachment is attached to correct transit gateway.The Connect attachment uses the correct VPC Transport Attachment (the VPC attachment of the SD-WAN appliance) and is in an Available state.Confirm that the Connect peers are configured correctlyOpen the Amazon VPC console.From the navigation pane, choose Transit gateway attachments.Select the connect attachment.Choose Connect Peers. Verify that:The peer GRE address is the private IP address of the SD-WAN instance that you want to create the GRE tunnel to.The Transit Gateway GRE address is one of the available IP addresses from the Transit Gateway CIDR.The BGP inside IPs are part of a /29 CIDR block from the 169.254.0.0/16 range for IPv4. Optionally, you can specify a /125 CIDR block from the fd00::/8 range for IPv6. See Transit Gateway Connect peers for a list of CIDR blocks that are reserved and can't be used.Confirm your third-party appliance configurationVerify that your third-party appliance configuration matches all requirements and considerations. 
If your appliance has more than one interface, make sure that OS routing is configured to send GRE packets out on the correct interface.Confirm that there is a Transit Gateway attachment in the same Availability Zone as the SD-WAN applianceOpen the Amazon VPC console.From the navigation pane, choose Subnets.Select the subnets used by the VPC attachment and SD-WAN instance.Verify that the Availability Zone IDs of both subnets are the same.Troubleshoot route tables and routingConfirm the VPC route table for the source instance and SD-WAN instanceOpen the Amazon VPC console.From the navigation pane, choose Route tables.Select the route table used by the instance.Choose the Routes tab.Verify that there's a route with the correct Destination CIDR block and with the Target as Transit Gateway ID. For the source instance, the Destination CIDR block is the Remote Network CIDR. For the SD-WAN instance, the Destination CIDR block is the Transit Gateway CIDR block.Confirm the Transit Gateway attachment and source VPC attachment’s routing tablesOpen the Amazon VPC console.Choose Transit gateway route tables.Confirm that the source VPC attachment's associated route table has a route propagating from the Connect attachment for the remote network.Confirm that the Transit Gateway Connect attachment's associated route table has a route for the source VPC and SD-WAN appliance's VPC.Troubleshoot network securityConfirm that the network ACLs allow trafficOpen the Amazon VPC console.From the navigation pane, choose Subnets.Select the subnets used by the VPC attachment and SD-WAN instance.Choose the Network ACL tab. Verify that:The SD-WAN instance's network ACL allows GRE traffic.The source instance's network ACL allows traffic.The network ACL associated with the transit gateway network interface allows traffic.Confirm that the source and SD-WAN EC2 instance's security group allows trafficOpen the Amazon EC2 console.From the navigation pane, choose Instances.Select the appropriate instances.Choose the Security tab.Confirm that the SD-WAN instance's security group allows GRE traffic, either in the inbound rules to accept GRE initiations or in the outbound rules to initiate GRE sessions. Confirm that the source instance's security group allows the traffic.Follow"
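In addition to the console checks above, you can inspect the same configuration from the AWS CLI. This sketch isn't part of the original article and uses placeholder attachment and route table IDs.
# Confirm the Connect attachment state and its transport attachment
aws ec2 describe-transit-gateway-connects --transit-gateway-attachment-ids tgw-attach-0123456789abcdef0
# Review the Connect peer GRE and BGP inside addresses
aws ec2 describe-transit-gateway-connect-peers --filters Name=transit-gateway-attachment-id,Values=tgw-attach-0123456789abcdef0
# Verify that the remote network route is propagated into the route table associated with the source VPC attachment
aws ec2 search-transit-gateway-routes --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 --filters Name=type,Values=propagated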
https://repost.aws/knowledge-center/troubleshoot-transit-gateway-third-party
How can I access other AWS services from my Amazon ECS tasks on Fargate?
I want to access other AWS services from my Amazon Elastic Container Service (Amazon ECS) tasks on AWS Fargate.
"I want to access other AWS services from my Amazon Elastic Container Service (Amazon ECS) tasks on AWS Fargate.Short descriptionBefore you get started, you must identify the following:The AWS services that your Fargate tasks are trying to accessThe resources that your Fargate tasks have permissions to act onThe following example resolution is based on an application running on Fargate that includes:A Fargate task that's trying to put data into an Amazon Simple Storage Service (Amazon S3) bucketA list of objects (the resources) that will be put into the bucketNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.ResolutionCreate an S3 bucket and IAM role1.     Create an S3 bucket where you can store your data. Example bucket name: fargate-app-bucketNote: The bucket name must be unique as per S3 bucket naming requirements.2.     Create an AWS Identity and Access Management (IAM) role with permissions to access your S3 bucket. Example role name: AWS-service-access-roleNote: In this example, the application is required only to put objects into an S3 bucket and list those objects. For more information on the trust relationship of the IAM role, see Creating an IAM role and policy for your tasks.Example IAM policy:{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject" ], "Resource": "arn:aws:s3:::fargate-app-bucket/*" } ]}Create an Amazon ECS cluster and task definition1.     Create an Amazon ECS cluster on Fargate using either the AWS Management Console or the AWS CLI in your AWS Region.2.     Create a task definition using the Fargate launch type with a task role name inside the task role.Important: In your task definition, set the task role parameter to the IAM role that you created earlier. This task role is used by the container to access AWS services.Use the task role with the Fargate containerFor more information on how the Amazon ECS container agent for Fargate works with role credentials, see IAM roles for tasks.1.     To query the container credentials, run the following command from inside your container:$ curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI2.     In the container definition, add the image name that the container will use.Note: For example, you can use the official docker image "amazon/aws-cli:latest" to help AWS CLI make AWS API calls.3.     In the command section for the container (inside the container definition only), run the following command to put an object into your S3 bucket:"command": [s3api, put-object, --bucket, fargate-app-bucket, --key, test-file.txt]Important: You must include the test-file.txt file in the image when the image is built. This makes sure that the file exists on the container when it runs on Fargate. The command in step 3 runs when the task runs or when the container starts.Create and run a task1.     Create a task using the task definition that you created earlier.2.     Inside your Fargate cluster, run a standalone task using a Fargate launch type and the task definition that you created earlier.Note: You can also run a task by using a service.When the task begins its lifecycle, the task first goes into RUNNING state, and then performs its job. Later, the task is STOPPED because the container is only responsible for running a single AWS CLI command.You can view the stopped task in Amazon CloudWatch Logs. 
The log shows output similar to the following:{"ETag": "\"d41d8cd98f00b204e9800998ecf8427e\""}Note: If you look in the S3 bucket later, you can see that the object test-file.txt is successfully generated.Check to see what happens when you don't use the task role inside the task definition1.     Create a new revision of the task definition that you created earlier, and set the value of the task role to None.2.     Run the task again with your new revision of the task definition.After the task completes its lifecycle, you can use CloudWatch Logs to see output similar to the following:Unable to locate credentials. You can configure credentials by running "aws configure".Note: To access other AWS services from your Fargate tasks, you must create an IAM role with permissions to access the services. Then, you must use this role within the task definition (in the task role parameter) to give the container access to the AWS services.Important: The environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is available only to PID 1 processes within a container. If the container is running multiple processes or init processes (such as a wrapper script, start script, or supervisord), then the environment variable is unavailable to non-PID 1 processes. Those processes might result in "Access denied" errors when they try to access AWS services. To make the environment variable available to non-PID 1 processes, export it in the .profile file. For example, add the following instruction to the Dockerfile for your container image:RUN echo 'export $(strings /proc/1/environ | grep AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)' >> /root/.profileFollow"
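To confirm which IAM identity the container is actually using, and this check isn't part of the original article, you can run the following commands inside the running container. They assume an image that includes the AWS CLI and curl, such as the amazon/aws-cli image mentioned above (curl might need to be added separately).
# Should return the ARN of the task role, not an instance or user identity
aws sts get-caller-identity
# The same credentials endpoint that the SDKs and CLI use behind the scenes
curl -s 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
If get-caller-identity reports "Unable to locate credentials", the task role isn't set on the task definition, or the process can't see the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI variable, as described in the Important note above.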
https://repost.aws/knowledge-center/ecs-fargate-access-aws-services
Why do I get the error "HIVE_BAD_DATA: Error parsing field value '' for field X: For input string: """ when I query CSV data in Amazon Athena?
"When I query data in Amazon Athena, I get an error similar to one of the following: "HIVE_BAD_DATA: Error parsing field value for field X: For input string: "12312845691" or "HIVE_BAD_DATA: Error parsing column '0': target scale must be larger than source scale.""
"When I query data in Amazon Athena, I get an error similar to one of the following: "HIVE_BAD_DATA: Error parsing field value for field X: For input string: "12312845691" or "HIVE_BAD_DATA: Error parsing column '0': target scale must be larger than source scale."Short descriptionThere are several versions of the HIVE_BAD_DATA error. The error message might specify a null or empty input string, such as "For input string: """. For this type of error message, see Why do I get the error "HIVE_BAD_DATA: Error parsing field value '' for field X: For input string: """ when I query CSV data in Amazon Athena?Errors that specify an input string with a value occur under one of the following conditions:The data type that's defined in the table definition doesn't match the actual source data.A single field contains different types of data, such as an integer value for one record and a decimal value for another record.ResolutionIt's a best practice to use only one data type in a column. Otherwise, the query might fail. To resolve errors, be sure that each column contains values of the same data type and that the values are in the allowed ranges.If you still get errors, then change the column's data type to a compatible data type that has a higher range. If changing the data type doesn't solve the problem, then try the solutions in the following examples.Example 1Source format: JSONIssue: In the last record, the id key value is "0.54." This key value is the DECIMAL data type. For the other records, the id key value is set to INT.Source data:{ "id" : 50, "name":"John" }{ "id" : 51, "name":"Jane" }{ "id" : 53, "name":"Jill" }{ "id" : 0.54, "name":"Jill" }Data Definition Language (DDL) statement:CREATE EXTERNAL TABLE jsontest_error_hive_bad_data ( id INT, name STRING)ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'WITH SERDEPROPERTIES ( 'ignore.malformed.json' = 'true')LOCATION 's3://awsexamplebucket/jsontest_error_hive_bad_data/';Data Manipulation Language (DML) statement:SELECT * FROM jsontest_error_hive_bad_dataError:Your query has the following error(s):HIVE_BAD_DATA: Error parsing field value '0.54' for field 0: For input string: "0.54"This query ran against the "default" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: bd50793b-94fc-42a7-b131-b7c81da273b2.To resolve this issue, redefine the id column as STRING. The STRING data type can correctly represent all values in this dataset. Example:CREATE EXTERNAL TABLE jsontest_error_hive_bad_data_correct_id_data_type ( id STRING, name STRING)ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'WITH SERDEPROPERTIES ( 'ignore.malformed.json' = 'true')LOCATION 's3://awsexamplebucket/jsontest_error_hive_bad_data/';DML statement:SELECT * FROM jsontest_error_hive_bad_data_correct_id_data_typeYou can also cast to the desired data type. For example, you can cast a string as an integer. However, depending on the data types that you're casting from and to, this might return null or inaccurate results. Values that can't be cast are discarded. For example, casting the string value "0.54" to INT returns null results:SELECT TRY_CAST(id AS INTEGER) FROM jsontest_error_hive_bad_data_correct_id_data_typeExample output:Results _col01 502 513 534The output shows that the value "0.54" was discarded. You can't cast that value directly from a string to an integer. To resolve this issue, use COALESCE (from the Presto website) to cast the mixed type values in the same column as the output. 
Then, allow the aggregate function to run on the column. Example:SELECT COALESCE(TRY_CAST(id AS INTEGER), TRY_CAST(id AS DECIMAL(10,2))) FROM jsontest_error_hive_bad_data_correct_id_data_typeOutput:Results _col01 50.002 51.003 53.004 0.54Run aggregate functions:SELECT SUM(COALESCE(TRY_CAST(id AS INTEGER), TRY_CAST(id AS DECIMAL(10,2)))) FROM jsontest_error_hive_bad_data_correct_id_data_typeOutput: _col01 154.54Example 2Source format: JSONIssue: The id column is defined as INT. Athena couldn't parse "49612833315" because the range for INT values in Presto is -2147483648 to 2147483647.Source data:{ "id" : 50, "name":"John" }{ "id" : 51, "name":"Jane" }{ "id" : 53, "name":"Jill" }{ "id" : 49612833315, "name":"Jill" }DDL statement:CREATE EXTERNAL TABLE jsontest_error_hive_bad_data_sample_2 ( id INT, name STRING)ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'WITH SERDEPROPERTIES ( 'ignore.malformed.json' = 'true')LOCATION 's3://awsexamplebucket/jsontest_error_hive_bad_data_2/';DML statement:SELECT * FROM jsontest_error_hive_bad_data_sample_2Error:Your query has the following error(s):HIVE_BAD_DATA: Error parsing field value '49612833315' for field 0: For input string: "49612833315"This query ran against the "default" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: 05b55fb3-481a-4012-8c0d-c27ef1ee746f.To resolve this issue, define the id column as BIGINT, which can read the value "49612833315." For more information, see Integer types.Modified DDL statement:CREATE EXTERNAL TABLE jsontest_error_hive_bad_data_sample_2_corrected ( id BIGINT, name STRING)ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'WITH SERDEPROPERTIES ( 'ignore.malformed.json' = 'true')LOCATION 's3://awsexamplebucket/jsontest_error_hive_bad_data_2/';Example 3Source format: JSONIssue: The input data is DECIMAL, and the column is defined as DECIMAL in the table definition. However, the scale is defined as 2, and 2 doesn't match the "0.000054" value. For more information, see DECIMAL or NUMERIC type.Source data:{ "id" : 0.50, "name":"John" }{ "id" : 0.51, "name":"Jane" }{ "id" : 0.53, "name":"Jill" }{ "id" : 0.000054, "name":"Jill" }DDL statement:CREATE EXTERNAL TABLE jsontest_error_hive_bad_data_sample_3( id DECIMAL(10,2), name STRING)ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'WITH SERDEPROPERTIES ( 'ignore.malformed.json' = 'true')LOCATION 's3://awsexamplebucket/jsontest_error_hive_bad_data_3/';DML statement:SELECT * FROM jsontest_error_hive_bad_data_sample_3Error:Your query has the following error(s):HIVE_BAD_DATA: Error parsing column '0': target scale must be larger than source scaleThis query ran against the "default" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: 1c3c7278-7012-48bb-8642-983852aff999.To resolve this issue, redefine the column with a scale that captures all input values. For example, instead of (10,2), use (10,7).CREATE EXTERNAL TABLE jsontest_error_hive_bad_data_sample_3_corrected( id DECIMAL(10,7), name STRING)ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'WITH SERDEPROPERTIES ( 'ignore.malformed.json' = 'true')LOCATION 's3://awsexamplebucket/jsontest_error_hive_bad_data_3/';Follow"
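If you want to rerun the corrected queries without the Athena console, the following AWS CLI sketch isn't part of the original article; the workgroup name and the S3 output location are placeholders that you would replace with your own values.
# Submit the query against the corrected table
aws athena start-query-execution --query-string "SELECT * FROM jsontest_error_hive_bad_data_correct_id_data_type" --work-group primary --result-configuration OutputLocation=s3://awsexamplebucket/athena-query-results/
# Check the state of the query by using the QueryExecutionId returned by the previous command
aws athena get-query-execution --query-execution-id <query-execution-id>
A state of SUCCEEDED confirms that the redefined column types parse the data; a FAILED state typically includes the HIVE_BAD_DATA message in the StateChangeReason field.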
https://repost.aws/knowledge-center/athena-hive-bad-data-error-csv
How can I get my worker nodes to join my Amazon EKS cluster?
My worker nodes won't join my Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
"My worker nodes won't join my Amazon Elastic Kubernetes Service (Amazon EKS) cluster.Short descriptionTo get your worker nodes to join your Amazon EKS cluster, complete the following steps:Use AWS Systems Manager automation runbook to Identify common issues.Confirm that you have DNS support for your Amazon Virtual Private Cloud (Amazon VPC).Confirm that your instance profile's worker nodes have the correct permissions.Configure the user data for your worker nodes.Verify that networking is configured correctly for your Amazon VPC subnets.Verify that your worker nodes are in the same VPC as your EKS cluster.Update the aws-auth ConfigMap with the NodeInstanceRole of your worker nodes.Meet the security group requirements of your worker nodes.Set the tags for your worker nodes.Confirm that your worker nodes can reach the API server endpoint for your EKS cluster.Confirm that the cluster role is correctly configured for your EKS cluster.For AWS Regions that support AWS Security Token Service (AWS STS) endpoints, confirm that the Regional AWS STS endpoint is activated.Be sure that the AMI is configured to work with Amazon EKS and includes the required components.Use SSH to connect connect to your worker node's Amazon Elastic Compute Cloud (Amazon EC2) instance, and then search through kubelet agent logs for errors.Use the Amazon EKS log collector script to troubleshoot errors.Important: The following steps don't include configurations that are required to register worker nodes in environments where the following criteria aren't met:In the VPC for your cluster, the configuration parameter domain-name-servers is set to AmazonProvidedDNS. For more information, see DHCP options sets.You're using an Amazon EKS-optimized Linux Amazon Machine Image (AMI) to launch your worker nodes.Note: The Amazon EKS-optimized Linux AMI provides all necessary configurations, including a /etc/eks/bootstrap.sh bootstrap script, to register worker nodes to your cluster.ResolutionUse the Systems Manager automation runbook to identify common issuesUse the AWSSupport-TroubleshootEKSWorkerNode runbook to find common issues that prevent worker nodes from joining your cluster.Important: For the automation to work, your worker nodes must have permission to access Systems Manager and have Systems Manager running. To grant this permission, attach the AmazonSSMManagedInstanceCore policy to the AWS Identity and Access Management (IAM) role. This is the IAM role that corresponds to your Amazon EC2 instance profile. This is the default configuration for Amazon EKS managed node groups that you create through eksctl. Use the following format for your cluster name: [-a-zA-Z0-9]{1,100}$.Open the runbook.Check that the AWS Region in the AWS Management Console is set to the same Region as your cluster.Note: Review the Document details section of the runbook for more information about the runbook.In the Input parameters section, specify the name of your cluster in the ClusterName field and Amazon EC2 instance ID in the WorkerID field.(Optional) In the AutomationAssumeRole field, specify the IAM role to allow Systems Manager to perform actions. 
If you don't specify it, then the IAM permissions of your current IAM entity are used to perform the actions in the runbook.Choose Execute.Check the Outputs section to see why your worker node isn't joining your cluster and steps that you can take to resolve it.Confirm that you have DNS support for your Amazon VPCConfirm that the Amazon VPC for your EKS cluster has DNS hostnames and DNS resolution turned on.To check these attributes and turn these on, complete the following steps:Open the Amazon VPC console.In the navigation pane, choose Your VPCs.Select the VPC that you want to edit.Under the Details tab, check if DNS hostnames and DNS resolution are turned on.If they aren't turned on, select Enable for both attributes.Choose Save changes.For more information, see View and update DNS attributes for your VPC.Confirm that your worker nodes' instance profile has the correct permissionsAttach the following AWS managed policies to the role that's associated with your worker nodes' instance profile:AmazonEKSWorkerNodePolicyAmazonEKS_CNI_PolicyAmazonEC2ContainerRegistryReadOnlyTo attach policies to roles, see Adding IAM identity permissions (console).Configure the user data for your worker nodesNote: If you use AWS CloudFormation to launch your worker nodes, then you don't have to configure the user data for your worker nodes. Instead, follow the instructions for launching self-managed Amazon Linux nodes in the AWS Management Console.If you launch your worker nodes using managed node groups, then you don't have to configure any user data with Amazon EKS optimized Amazon Linux AMIs. You must configure the user data only if you use custom AMIs to launch your worker nodes through managed node groups.If you're using Amazon EKS managed node groups with a custom launch template, then specify the correct user data in the launch template. If the Amazon EKS cluster is a fully private cluster that uses VPC endpoints to make connections, then specify the following in the user data:certificate-authorityapi-server-endpointDNS cluster IPIf required, provide user data to pass arguments to the bootstrap.sh file included with an Amazon EKS optimized Linux/Bottlerocket AMI.To configure user data for your worker nodes, specify the user data when you launch your Amazon EC2 instances.For example, if you use a third-party tool such as Terraform, then update the User data field to launch your EKS worker nodes:#!/bin/bashset -o xtrace/etc/eks/bootstrap.sh ${ClusterName} ${BootstrapArguments}Important:Replace ${ClusterName} with the name of your EKS cluster.Replace ${BootstrapArguments} with additional bootstrap values, or leave this property blank.Verify that networking is configured correctly for your Amazon VPC subnetsIf you use an internet gateway, then be sure that it's attached to the route table correctly without any blackhole.If you use a NAT gateway, then be sure that it's configured correctly in a public subnet. Also, verify that the route table doesn’t contain any blackhole.If you use VPC private endpoints for a fully private cluster, then be sure that you have the following endpoints:com.amazonaws.region.ec2 (interface endpoint)com.amazonaws.region.ecr.api (interface endpoint)com.amazonaws.region.ecr.dkr (interface endpoint)com.amazonaws.region.s3 (gateway endpoint)com.amazonaws.region.sts (interface endpoint)Pods that you configure with IAM roles for service accounts acquire credentials from an AWS Security Token Service (AWS STS) API call. 
If there's no outbound internet access, then you must create and use an AWS STS VPC endpoint in your VPC.The security group for the VPC endpoint must have an inbound rule that allows traffic on port 443. For more information, see Control traffic to resources using security groups.Be sure that the policy that's attached to the VPC endpoint has the required permissions.Note: If you're using any other AWS service, then you must create those endpoints. For some commonly used services and endpoints, see Private cluster requirements. Also, you might create an endpoint service based on your use case.Verify that your worker nodes are in the same Amazon VPC as your EKS clusterOpen the Amazon EKS console.Choose Clusters, and then select your cluster.In the Networking section, identify the subnets that are associated with your cluster.Note: You can configure different subnets to launch your worker nodes in. The subnets must exist in the same Amazon VPC and be appropriately tagged. Amazon EKS automatically manages tags only for subnets that you configure during cluster creation. Therefore, make sure that you tag the subnets appropriately.For more information, see Subnet requirements and considerations.Update the aws-auth ConfigMap with the NodeInstanceRole of your worker nodesVerify that the aws-auth ConfigMap is correctly configured with your worker node's IAM role and not the instance profile.To check the aws-auth ConfigMap file, run the following command:kubectl describe configmap -n kube-system aws-authIf the aws-auth ConfigMap isn't configured correctly, then you see the following error:571 reflector.go:153] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458 : Failed to list *v1.Node: UnauthorizedMeet the security group requirements of your worker nodesConfirm that your control plane's security group and worker node security group are configured with settings that are best practices for inbound and outbound traffic. Also, confirm that your custom network ACL rules are configured to allow traffic to and from 0.0.0.0/0 for ports 80, 443, and 1025-65535.Set the tags for your worker nodesFor the Tag property of your worker nodes, set Key to kubernetes.io/cluster/clusterName and set Value to owned.For more information, see VPC requirements and considerations.Confirm that your worker nodes can reach the API server endpoint for your EKS clusterConsider the following points:You can launch worker nodes in a subnet that's associated with a route table that routes to the API endpoint through a NAT or internet gateway.If you launch your worker nodes in a restricted private network, then confirm that your worker nodes can reach the EKS API server endpoint.If you launch worker nodes with an Amazon VPC that uses a custom DNS instead of AmazonProvidedDNS, then they might not resolve the endpoint. An unresolved endpoint happens when public access to the endpoint is deactivated, and only private access is activated. For more information, see Turning on DNS resolution for Amazon EKS cluster endpoints.Confirm that the cluster role is correctly configured for your clusterYour cluster must have the cluster role with the minimum AmazonEKSClusterPolicy permission. 
Also, the trust relationship of the cluster role must allow the eks.amazonaws.com service to perform sts:AssumeRole.Example:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "eks.amazonaws.com" }, "Action": "sts:AssumeRole" } ]}For more information, see Amazon EKS cluster IAM role.Confirm that Regional STS endpoints are activatedIf the cluster is in a Region that supports STS endpoints, then activate the Regional STS endpoint to authenticate the kubelet. The kubelet can then create the node object.Be sure that the AMI is configured to work with EKS and includes the required componentsIf the AMI used for worker nodes isn't the Amazon EKS optimized Amazon Linux AMI, then confirm that the following Kubernetes components are in an active state:kubeletAWS IAM AuthenticatorDocker (Amazon EKS version 1.23 and earlier)containerdConnect to your EKS worker node instance with SSH and check kubelet agent logsThe kubelet agent is configured as a systemd service.1.    To validate your kubelet logs, run the following command:journalctl -f -u kubelet2.    To resolve any issues, check the Amazon EKS troubleshooting guide for common errors.Use the Amazon EKS log collector script to troubleshoot errorsYou can use the log files and operating system logs to troubleshoot issues in your Amazon EKS cluster.You must use SSH to connect to the worker node with the issue and run the following script:curl -O https://raw.githubusercontent.com/awslabs/amazon-eks-ami/master/log-collector-script/linux/eks-log-collector.sh sudo bash eks-log-collector.shRelated informationHow do I troubleshoot Amazon EKS managed node group creation failures?Follow"
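A quick way to cross-check several of these items at once, which isn't part of the original article, is to query the cluster configuration and test connectivity from a worker node. The cluster name and API server endpoint below are placeholders.
# Endpoint, VPC, subnets, and endpoint access settings that worker nodes depend on
aws eks describe-cluster --name my-cluster --query "cluster.{endpoint:endpoint,vpc:resourcesVpcConfig.vpcId,subnets:resourcesVpcConfig.subnetIds,publicAccess:resourcesVpcConfig.endpointPublicAccess,privateAccess:resourcesVpcConfig.endpointPrivateAccess}"
# From the worker node: any HTTP response means the network path works; a timeout points to routing, security group, or DNS issues
curl -k https://ABCDEF1234567890.gr7.us-east-1.eks.amazonaws.com/version
# From a machine with cluster access: nodes that joined successfully appear here
kubectl get nodes -o wide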
https://repost.aws/knowledge-center/eks-worker-nodes-cluster
How do I provide feedback on or report errors in AWS documentation?
"I noticed something that's incorrect in AWS documentation, or I have a suggestion for improving it."
"I noticed something that's incorrect in AWS documentation, or I have a suggestion for improving it.ResolutionAWS updates the AWS Documentation pages at docs.aws.amazon.com in response to feedback. Choose the Provide feedback button on any page on docs.amazon.com, and enter your detailed feedback.AWS also updates articles in the AWS Knowledge Center in response to feedback. To provide feedback to AWS on a Knowledge Center article, choose the Submit feedback or Let us know button on the article page.Related informationAWS DocumentationFollow"
https://repost.aws/knowledge-center/feedback-errors-documentation
How do I resolve resource records in my private hosted zone using Client VPN?
I'm creating an AWS Client VPN endpoint. I need to allow end users (clients connected to Client VPN) to query resource records hosted in my Amazon Route 53 private hosted zone. How can I do this?
"I'm creating an AWS Client VPN endpoint. I need to allow end users (clients connected to Client VPN) to query resource records hosted in my Amazon Route 53 private hosted zone. How can I do this?ResolutionTo allow end users to query records in a private hosted zone using Client VPN:Confirm that you've enabled "DNS resolution" and "DNS hostnames" in your Amazon Virtual Private Cloud (Amazon VPC). These settings must be enabled to access private hosted zones. For more information, see View and update DNS attributes for your VPC.Create a Client VPN endpoint, if you haven't already. Be sure to configure the "DNS Server IP address" parameter with the DNS server IP address that can be reached by the end users for the DNS resolution queries. Or, you can modify an existing Client VPN endpoint to update the DNS server settings.Depending on your server configuration and the values that you specify for the "DNS Server IP address" parameter, the resolution of the private hosted zone domain varies:With the Amazon DNS server (VPC IPv4 network range plus two) – End users can resolve the resource records of the private hosted zone associated with the VPC.With a custom DNS server located in the same VPC as the Client VPN endpoint's associated VPC – You can configure the custom DNS server to serve DNS queries as required. To resolve the resource records, configure the custom DNS server as a forwarder to forward DNS queries for the private hosted domain to the default VPC DNS resolver. To use the custom DNS server for all resources in the VPC, be sure to configure the DHCP options accordingly.Note: The custom DNS server might also reside in a peered VPC. In that case, the custom DNS server configuration is the same as the above. Be sure to associate your private hosted zone to both of the VPCs.With a custom DNS server located on-premises, and the "DNS Server IP address" parameter in the Client VPN disabled/blank – The DNS queries for the private hosted zone domain are forwarded to the Route 53 inbound resolver. You must create conditional forwarding rules in the on-premises custom DNS server to forward queries to the IP address of the Route 53 inbound resolver in the VPC over AWS Direct Connect or AWS Site-to-Site VPN.Note: If the client device doesn't have a route to the local DNS server when the Client VPN connection is established, then the DNS queries fail. In this case, you must manually add a preferred static route to the custom on-premises DNS server on the client device’s route table.With the "DNS Server IP address" parameter disabled – The client device uses the local DNS resolver to resolve DNS queries. If your local resolver is set to a public DNS resolver, then you can't resolve records in private hosted zones.Note: The following pertains to each of the four types of DNS server configurations:If full-tunnel mode is enabled, then a route for all traffic through the VPN tunnel is added to the client device's route table. End users can connect to the internet if the authorization rules and respective routes are added to the Client VPN endpoint's associated subnet route table.If split-tunnel mode is enabled, then the routes in the Client VPN endpoint's route table are added to the client device's route table.Related informationHow does DNS work with my AWS Client VPN endpoint?Follow"
https://repost.aws/knowledge-center/client-vpn-resolve-resource-records
How do I troubleshoot "Unable to verify secret hash for client <client-id>" errors from my Amazon Cognito user pools API?
"When I try to invoke my Amazon Cognito user pools API, I get an "Unable to verify secret hash for client <client-id>" error. How do I resolve the error?"
"When I try to invoke my Amazon Cognito user pools API, I get an "Unable to verify secret hash for client <client-id>" error. How do I resolve the error?Short descriptionWhen a user pool app client is configured with a client secret in the user pool, a SecretHash value is required in the API's query argument. If a secret hash isn't provided in the APIs query argument, then Amazon Cognito returns an **Unable to verify secret hash for client <client-id>**error.The following example shows how to create a SecretHash value and include it in either an InitiateAuth or ForgotPassword API call.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.To create a SecretHash valueFollow the instructions in Computing SecretHash values. You'll need your app client ID, app client secret, and the user name of the user in your Amazon Cognito user pool.-or-To automate the process, do the following:1.    If you haven't done so already, install Python.2.    Save the following example Python script as a .py file:import sysimport hmac, hashlib, base64username = sys.argv[1]app_client_id = sys.argv[2]key = sys.argv[3]message = bytes(sys.argv[1]+sys.argv[2],'utf-8')key = bytes(sys.argv[3],'utf-8')secret_hash = base64.b64encode(hmac.new(key, message, digestmod=hashlib.sha256).digest()).decode()print("SECRET HASH:",secret_hash)Note: Replace the following values before running the example script: For username, enter the user name of the user in the user pool. For app_client_id, enter your user pool's app client ID. For key, enter your app client's secret.3.    Run the following command to run the script:python3 secret_hash.py <username> <app_client_id> <app_client_secret>Note: Replace the following values before running the command: If you're running a version of Python earlier than Python 3.0, replace python3 with python. For secret_hash.py, enter the file name of the example script. For username, enter the user pool username. For app_client_id, enter your app client ID For app_client_secret, enter your app client's secret.The command response returns a SecretHash value.To include SecretHash values in API callsNote: A SecretHash value isn't required in Amazon Cognito API calls if your app client isn't configured with an app client secret. For more information, see Configuring a user pool app client.Add the SecretHash value you created as a SECRET_HASH parameter in the query string parameters of the API call.Example InitiateAuth API call that includes a SECRET_HASH parameter$ aws cognito-idp initiate-auth --auth-flow USER_PASSWORD_AUTH --auth-parameters USERNAME=<username>,PASSWORD=<password>,SECRET_HASH=<secret_hash> --client-id <client-id>Example InitiateAuth API call response{ "ChallengeParameters": {}, "AuthenticationResult": { "AccessToken": "<HIDDEN>", "ExpiresIn": 3600, "TokenType": "Bearer", "RefreshToken": "<HIDDEN>", "IdToken": "<HIDDEN>" }}Note: If you're using USER_PASSWORD_AUTH authentication flow, make sure that ALLOW_USER_PASSWORD_AUTH is turned on.Example ForgotPassword API call that includes a SECRET_HASH parameter$ aws cognito-idp forgot-password --client-id <client-id> --username <username> --secret-hash <secret-hash>Example ForgotPassword API call response{ "CodeDeliveryDetails": { "Destination": "+***********", "DeliveryMedium": "SMS", "AttributeName": "phone_number" }}Follow"
https://repost.aws/knowledge-center/cognito-unable-to-verify-secret-hash
Why am I unable to authenticate to my WorkSpace using the WorkSpaces client?
"When I try to log in using the Amazon WorkSpaces client, I see error messages similar to the following:"Authentication Failed: Please check your username and password to make sure you typed them correctly.""Directory Unavailable: Your directory could not be reached at this time. Please contact your Administrator for more details."I've confirmed that the password is entered correctly and the directory is available. Why am I still getting these error messages?"
"When I try to log in using the Amazon WorkSpaces client, I see error messages similar to the following:"Authentication Failed: Please check your username and password to make sure you typed them correctly.""Directory Unavailable: Your directory could not be reached at this time. Please contact your Administrator for more details."I've confirmed that the password is entered correctly and the directory is available. Why am I still getting these error messages?ResolutionAuthentication Failed errorsAuthentication Failed errors that occur when the correct credentials are used are typically related to a configuration issue in Active Directory.To troubleshoot this error, follow these steps:Confirm that the directory registration code in the WorkSpaces client matches the value associated with the WorkSpace1.    Open the WorkSpaces client. From the log-in window, choose Settings, Manage Login Information. Note the registration code.Note: If you have multiple registration codes, close the pop-up window, and then choose Change Registration Code.2.    Confirm that the registration code matches the value associated with the WorkSpace in the WorkSpaces console or welcome email.Note: To find the registration code from the console, open the WorkSpaces console to see a list of WorkSpaces in the selected AWS Region. Choose the arrow next to the WorkSpace ID to show the WorkSpace details, and then note the Registration Code.Check if the error is due to incorrect credentials or due to an error in WorkSpaces1.    Connect to the WorkSpace using a Remote Desktop Protocol (RDP) client or connect to the WorkSpace using SSH.2.    Enter your credentials. When you receive an Authentication Failed error, you can see whether the error is caused by:Incorrect credentials.An issue with the WorkSpace.A broken trust relationship with Active Directory.Another issue with an Active Directory user account.Proceed to troubleshoot the observed error in the RDP/SSH session of the WorkSpace according to the error that you received.Note: If you can't use RDP/SSH in WorkSpaces due to security or compliance reasons, then try to log in to any domain-joined Amazon Elastic Compute Cloud (Amazon EC2) instance using your WorkSpace user credentials for validation.Verify that the user's Active Directory user object meets the prerequisites1.    Make sure that Kerberos pre-authentication is turned on.2.    Clear the User must change password on next logon check box.3.    Run the following command to confirm that the user’s password isn’t expired, replacing username with your value:net user username /domain4.    If you use Simple AD or AWS Directory Service for Microsoft Active Directory, then choose Forgot Password? from the WorkSpaces client to reset the password.Confirm that the user object's sAMAccountName attribute wasn't modifiedWorkSpaces doesn’t support modifications to the username attribute of an Active Directory user. Authentication fails if the username attribute in WorkSpaces and Active Directory don’t match.If you changed the sAMAccountName, you can change it back. The WorkSpace resumes working correctly.If you must rename a user, follow these steps:Warning: Deleting a WorkSpace is a permanent action. The WorkSpace user's data doesn't persist and is destroyed.1.    Back up files from the user volume to an external location, such as Amazon WorkDocs or Amazon FSx.2.    Delete the WorkSpace.3.    Modify the username attribute.4.    
Launch a new WorkSpace for the user.Verify that the username attribute doesn't contain characters that aren't validUsername attribute character restrictions exist for Amazon Web Services (AWS) applications, including WorkSpaces. See Understand username restrictions for AWS applications to confirm that your username attribute uses only valid characters.If your WorkSpaces username attribute contains characters that aren't valid, follow these steps:Warning: Deleting a WorkSpace is a permanent action. The WorkSpace user's data doesn't persist and is destroyed.1.    Back up files from the user volume to an external location, such as WorkDocs or Amazon FSx.2.    Delete the WorkSpace.3.    Rename the username attribute in your domain using valid characters.Use the Active Directory Users and Computers tool to find the user.Open the context (right-click) menu for the user, and choose Properties.From the Account tab, rename both User logon name and User logon name (pre-Windows 2000).4.    Launch a new WorkSpace with the new username attribute.Verify that there isn't a time difference of more than 5 minutes across involved partiesAuthentication is sensitive to time differences with all involved parties. All domain controllers in the domain, the Remote Authentication Dial-In User Service (RADIUS) servers (if used), the WorkSpace instance, and the service itself must be in sync with each other.1.    If you're using multi-factor authentication (MFA), verify that the clock on all RADIUS servers is in sync with a reliable time source. (For example, pool.ntp.org.)2.    If the directory is customer managed (such as AD Connector), then verify that every domain controller is in sync with a reliable time source.3.    If you suspect that the time on the WorkSpace is inaccurate, reboot the WorkSpace. A reboot resynchronizes the WorkSpace with an atomic clock. After a few minutes, the WorkSpace also resynchronizes with a domain controller.4.    Run the following commands to verify the time against a reliable time source:Linux:ntpdate -q -u pool.ntp.orgWindows:w32tm.exe /stripchart /computer:pool.ntp.orgDirectory Unavailable errorsDirectory Unavailable errors that occur when the directory is available are typically related to an MFA configuration issue.To troubleshoot this error, follow these steps:Confirm that your RADIUS server is running and review the logs to confirm that authentication traffic is being approvedA Directory Unavailable error can occur if your configured RADIUS server isn't running or if network modifications prevent the RADIUS server from communicating with your domain controllers used by WorkSpaces.If you're using an AD Connector, your AD Connector's networking configuration must allow outbound access to your domain controllers and your RADIUS server. You can use VPC Flow Logs to confirm that all necessary traffic is sent to its destination.You can temporarily turn off MFA on the registered directory and confirm if you're able to log in without MFA turned on. If you're able to log in after turning off MFA, this confirms a configuration issue related to the RADIUS server.Related informationAdminister WorkSpace usersFollow"
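Before digging into Active Directory, it can help to confirm that the WorkSpace and its registered directory are healthy from the AWS side. The following AWS CLI sketch isn't part of the original article, and the directory and WorkSpace IDs are placeholders.
# The directory stage should be Active
aws ds describe-directories --directory-ids d-1234567890
# The directory registration state for WorkSpaces should be REGISTERED
aws workspaces describe-workspace-directories --directory-ids d-1234567890
# The WorkSpace state should be AVAILABLE; an ERROR or UNHEALTHY state points to an instance-level issue rather than credentials
aws workspaces describe-workspaces --workspace-ids ws-abc123xyz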
https://repost.aws/knowledge-center/workspaces-authentication-error
How can I revoke my ACM public certificate?
How can I revoke an AWS Certificate Manager (ACM) public certificate?
"How can I revoke an AWS Certificate Manager (ACM) public certificate?Short descriptionIf you no longer need your ACM public certificate, you can delete the certificate. If you need to revoke your ACM public certificate for compliance reasons, AWS Support can do this on your behalf. Important: Revoked ACM public certificates can't be used again with the same serial number.ResolutionSubmit a request to AWS Support to revoke the public certificateFollow the instructions to create a support case in the Support Center of the AWS Management Console.For emailed validated certificates, an email that looks similar to the following is sent to three registered addresses in WHOIS and the five common domain name addresses:Amazon Trust Services has been requested to revoke the followingcertificate. If you requested this revocation, please respond to thisemail with I approve.Domain: <DOMAIN>AWS account ID: <AWS Account ID>AWS Region name: <REGION>Certificate identifier: <CERTIFICATE IDENTIFIER>Sincerely,Amazon Trust ServicesFor DNS validated certificates, you might be contacted by AWS Support to add a unique TXT record in the DNS database to verify domain ownership.After receiving the requested information and domain ownership is confirmed, AWS Support revokes the public certificate.Verify that the ACM public certificate is revoked with OpenSSLNote: If you receive errors when running OpenSSL commands, make sure that you’re using the most recent OpenSSL version.1.    Get the certificate file information for your domain and save the output to a .pem file:$ openssl s_client -connect example.com:443 2>&1 < /dev/null | sed -n '/-----BEGIN/,/-----END/p' > example.pem2.    Check if the certificate has an Online Certificate Status Protocol (OCSP) URI:$ openssl x509 -noout -ocsp_uri -in example.pemOutput:http://ocsp.rootca1.amazontrust.com3.    Capture the certificate chain:$ openssl s_client -connect example.com:443 -showcerts 2>&1 < /dev/null4.    Save the .pem file.5.    Send an OCSP request similar to the following:openssl ocsp -issuer chain.pem -cert example.pem -url http://ocsp.rootca1.amazontrust.comOutput:Response verify OKexample.pem: revokedThis Update: Apr 9 03:02:45 2014 GMTNext Update: Apr 10 03:02:45 2014 GMTRevocation Time: Mar 25 15:45:55 2014 GMTIn the output, note that the response is revoked.Related informationBest practicesFollow"
https://repost.aws/knowledge-center/revoke-acm-public-certificate
How do I connect my Amazon SageMaker Studio notebook with an Amazon Redshift cluster?
I want to connect my Amazon SageMaker Studio notebook with an Amazon Redshift cluster.
"I want to connect my Amazon SageMaker Studio notebook with an Amazon Redshift cluster.ResolutionPublicly accessible clusterIf the Redshift cluster is publicly accessible, then you can access the cluster from either of the following:A SageMaker domain launched with public internet only and no Amazon Virtual Private Cloud (Amazon VPC) accessA SageMaker Studio domain launched in an Amazon VPCIf the Redshift cluster is in a different VPC, then configure a VPC peering connection to make sure that Studio can access the cluster.Private clusterIf the Redshift cluster is private, then you can access the cluster only through a SageMaker Studio domain launched in an Amazon VPC. If the cluster is in a different VPC, configure a VPC peering connection to make sure that Studio can access the cluster.Additional requirementsBe sure that the following requirements are met for both types of clusters:The security group attached to the SageMaker Studio allows outbound traffic to ephemeral ports. When a Studio client connects to a Redshift server, a random port from the ephemeral port range (1024-65535) becomes the client's source port.The security group attached to the Redshift cluster allows inbound connection from the security group attached to the SageMaker Studio domain on port 5439.If you configured custom DNS, verify that the DNS server used by the Studio VPC can resolve the hostname of the Redshift cluster.Related informationConnect to an external data sourceUsing the Amazon Redshift data API to interact from an Amazon SageMaker Jupyter notebookRead the Docs documentation for Ingest data with RedshiftFollow"
https://repost.aws/knowledge-center/sagemaker-studio-redshift-connect
How can I resolve an “Access Error” after configuring my VPC flow log?
"After I configure my virtual private cloud (VPC) flow log, I receive an "Access Error.""
"After I configure my virtual private cloud (VPC) flow log, I receive an "Access Error."Short descriptionIf you have a permissions issue when you configure your VPC flow log, then you see the following error:"Access Error. The IAM role for your flow logs does not have sufficient permissions to send logs to the CloudWatch log group."The following scenarios commonly cause this error:The Identity and Access Management (IAM) role for your flow log doesn't have permission to publish flow log records to the Amazon CloudWatch log group.The IAM role doesn't have a trust relationship with the flow logs service.The trust relationship doesn't specify the flow logs service as the principal.ResolutionThe IAM role for your flow log doesn't have permission to publish flow log records to the CloudWatch log groupThe IAM role that's associated with your flow log must have sufficient permissions to publish flow logs to the specified log group in CloudWatch Logs. The IAM role must belong to your AWS account. Make sure that the IAM role has the following permissions:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents", "logs:DescribeLogGroups", "logs:DescribeLogStreams" ], "Resource": "*" } ]}The IAM role doesn't have a trust relationship with the flow logs serviceMake sure that your role has a trust relationship that allows the flow logs service to assume the role:1.    Log in to the IAM console.2.    Select Roles.3.    Select VPC-Flow-Logs.4.    Select Trust relationships.5.    Select Edit trust policy.6.    Delete the current code in this section, and then enter the following policy:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "vpc-flow-logs.amazonaws.com" }, "Action": "sts:AssumeRole" } ]}7.    Select Update policy.Trust relationships give you control over what services are allowed to assume roles. In this example, the relationship allows the VPC Flow Logs service to assume the role.The trust relationship doesn't specify the flow logs service as the PrincipalMake sure that the trust relationship specifies the flow logs service as the Principal:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "vpc-flow-logs.amazonaws.com" }, "Action": "sts:AssumeRole" } ]}Related informationIAM role for publishing flow logs to CloudWatch LogsTroubleshoot VPC flow logsFollow"
https://repost.aws/knowledge-center/vpc-flow-log-access-error
How do I integrate Salesforce Knowledge Base articles with Amazon Connect Wisdom?
I want to integrate my Salesforce Knowledge Base to Amazon Connect Wisdom so that contact center agents can view articles on their Contact Control Panel (CCP) dashboard. How can I set up and troubleshoot this integration?
"I want to integrate my Salesforce Knowledge Base to Amazon Connect Wisdom so that contact center agents can view articles on their Contact Control Panel (CCP) dashboard. How can I set up and troubleshoot this integration?Short descriptionUse Amazon Connect Wisdom to integrate knowledge base articles from Salesforce. Agents can view these articles on the CCP dashboard.Before starting, make sure that your SalesForce knowledge repository is set up and you that have created at least one article for testing purposes.ResolutionConfigure Amazon Connect WisdomNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.1.    Add an integration in Amazon Connect Wisdom, selecting Salesforce as the Source.    For instance URL, enter your Salesforce domain name. To find your Salesforce domain name, log in to your Salesforce account, and then choose View Profile. Your domain name is listed in the format: https://example.com-dev-ed.my.salesforce.com.2.    Select objects and fields.        For Select Fields for [object name], select attributes that display with the Knowledge Base article, such as ArticleNumber and ArticleCreationDate.3.    Review and verify the integration.Note: After you set up an integration, you can't edit the details. To update details, you must create a new integration.To verify fields that aren't visible through the AWS Console, use the following command. Replace knowledge-base-id with the ID number of your knowledge base.aws wisdom list-contents --region us-east-1 --knowledge-base-id xxxThe output looks similar to this:"metadata": { "ArticleNumber": "000001003", "Id": "ka02w000001RQGHAA4", "IsDeleted": "false", "PublishStatus": "Online", "Title": "Demo", "VersionNumber": "1", "aws:wisdom:externalVersion": "1" }4.    Add a Wisdom block to your contact flow.5.    To test the integration, access the CCP dashboard using the following URL, replacing connect-instance-alias with your alias: https://connect-instance-alias.my.connect.aws/agent-app-v2/.6.    In Search Wisdom, enter a knowledge base article reference ID or name. If the integration is complete, then the article appears.Note: You can add only one integration per domain. To create more integrations, request a limit increase through AWS Support.Troubleshoot knowledge base articles on the CCP dashboardIf you can't see knowledge base articles on the CCP console, use the following troubleshooting steps.Confirm the article and integration settingsConfirm that the article is published in Salesforce.Check the ingestion settings to determine if the ingestion is configured to import the records after a specific time and date. These settings are located in the Wisdom integration settings in the Amazon Connect console.Confirm that the Amazon Connect Wisdom knowledge base ID has contents associated1.    Run the following command to find the knowledge-base-id:aws connect list-integration-associations --instance-id xxxxx2.    Run the following command to get the Knowledge Base article ID. Replace your-knowledge-base-id with the knowledge-base-id that you found in the previous step.{ "IntegrationAssociationId": "xxx", "IntegrationAssociationArn": "arn:aws:connect:us-east-1:xxx:instance/xxx/integration-association/xxx", "InstanceId": "xxx", "IntegrationType": "WISDOM_KNOWLEDGE_BASE", "IntegrationArn": "arn:aws:wisdom:us-east-1:xxxx:knowledge-base/your-knowledge-base-id" }3.    
Run the following command to list all the articles integrated with the knowledge base integration. Replace your-knowledge-base-id with the knowledge-base-id value that you found previously.aws wisdom list-contents --region us-east-1 --knowledge-base-id your-knowledge-base-idThe output looks similar to the following. In this example, Demo is the name of the knowledge base article. If the command results in a NULL value, then check your settings to confirm that you associated the correct knowledge-base-id.{ "contentSummaries": [ { "contentArn": "arn:aws:wisdom:us-east-1:xxx:content/xxx/xxx", "contentId": "xxx", "contentType": "application/x.wisdom-json;source=salesforce", "knowledgeBaseArn": "arn:aws:wisdom:us-east-1:xxxx:knowledge-base/your-knowledge-base id", "knowledgeBaseId": "your-knowledge-id", "metadata": { "ArticleNumber": "000001003", "Id": "ka02w000001RQGHAA4", "IsDeleted": "false", "PublishStatus": "Online", "Title": "Demo", "VersionNumber": "1", "aws:wisdom:externalVersion": "1" }, "name": "000001003", "revisionId": "xxx==", "status": "ACTIVE", "tags": {}, "title": "Demo" }Follow"
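If you prefer not to read the knowledge-base-id out of the raw JSON, a JMESPath filter can return just the Wisdom knowledge base ARN (the ID is the last segment of that ARN). This is a sketch with a placeholder instance ID that assumes the default ListIntegrationAssociations output shape:
aws connect list-integration-associations --instance-id your-connect-instance-id --query "IntegrationAssociationSummaryList[?IntegrationType=='WISDOM_KNOWLEDGE_BASE'].IntegrationArn"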
https://repost.aws/knowledge-center/connect-salesforce-integration
How do I turn on Redis Slow log in an ElastiCache for Redis cache cluster?
I want to turn on the Redis Slow log in my ElastiCache for Redis cache cluster. How can I do this?
"I want to turn on the Redis Slow log in my ElastiCache for Redis cache cluster. How can I do this?Short descriptionThe Redis Slow log feature logs queries that exceed a specified time period. The Slow log provides an option to either log slow queries to Amazon CloudWatch or to Amazon Kinesis Data Firehose. Redis Slow Log is a good tool for debugging and tracing your Redis database, especially if you're experiencing high latency and/or high CPU usage.A new entry is added to the slow log when a command exceeds the execution time set by the slowlog-log-slower-than parameter. Each log entry is delivered to the specified destination (CloudWatch or Kineses) in JSON or text format.The following are examples of each format:JSON{ "CacheClusterId": "logslowxxxxmsxj", "CacheNodeId": "0001", "Id": 296, "Timestamp": 1605631822, "Duration (us)": 0, "Command": "GET ... (1 more arguments)", "ClientAddress": "192.168.12.104:55452", "ClientName": "logslowxxxxmsxj##"}Textlogslowxxxxmsxj,0001,1605631822,30,GET ... (1 more arguments),192.168.12.104:55452,logslowxxxxmsxj##ResolutionPrerequisitesRedis Slow log requires Redis engine version 6.0 and up. If your engine version is lower than 6.0, you can manually retrieve the slow log using the slowlog get 128 command. Each node has its own slow log. So, you must collect the log from each node within the cluster.Turning on the Slow log feature during cluster creation or modification requires permission to publish to CloudWatch or Kinesis Firehose. Use the following permissions by creating an Identify and Access Management policy and attaching it to the responsible user:Amazon CloudWatch permissions:{ "Version": "2012-10-17", "Statement": [ { "Action": [ "logs:CreateLogDelivery", "logs:GetLogDelivery", "logs:UpdateLogDelivery", "logs:DeleteLogDelivery", "logs:ListLogDeliveries" ], "Resource": [ "*" ], "Effect": "Allow", "Sid": "ElastiCacheLogging" }, { "Sid": "ElastiCacheLoggingCWL", "Action": [ "logs:PutResourcePolicy", "logs:DescribeResourcePolicies", "logs:DescribeLogGroups" ], "Resource": [ "*" ], "Effect": "Allow" } ]}Amazon Kinesis Data Firehose permissions:{ "Version": "2012-10-17", "Statement": [ { "Action": [ "logs:CreateLogDelivery", "logs:GetLogDelivery", "logs:UpdateLogDelivery", "logs:DeleteLogDelivery", "logs:ListLogDeliveries" ], "Resource": [ "*" ], "Effect": "Allow", "Sid": "ElastiCacheLogging" }, { "Sid": "ElastiCacheLoggingFHSLR", "Action": [ "iam:CreateServiceLinkedRole" ], "Resource": "*", "Effect": "Allow" }, { "Sid": "ElastiCacheLoggingFH", "Action": [ "firehose:TagDeliveryStream" ], "Resource": "Amazon Kinesis Data Firehose delivery stream ARN", "Effect": "Allow" } ]}Turn on Redis slow logs in your ElastiCache clusterAfter meeting the prerequisites, you can turn on Slow log while creating your cluster or by modifying an existing cluster.For directions on turning on Redis Slow logs from the AWS console, see Specifying log delivery using the Console.Slow log contentsAfter selecting the log destination, when a query exceeds the specified time frame, the event is logged to the log destination. Each logged event contains the following content:CacheClusterId: The ID of the cache cluster.CacheNodeId: The ID of the cache node.Id: A unique progressive identifier for every slow log entry.Timestamp: The Unix timestamp at which the logged command was processed.Duration: The amount of time needed for its execution, in microseconds.Command: The command used by the client. For example, set foo bar where foo is the key and bar is the value. 
ElastiCache for Redis replaces the actual key name and value with (2 more arguments) to avoid exposing sensitive data.ClientAddress: Client IP address and port.ClientName: Client name if set via the CLIENT SETNAME command.Follow"
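If you prefer the AWS CLI over the console, log delivery can be turned on with the --log-delivery-configurations option. The following sketch assumes a single-node cluster named my-redis-cluster and a CloudWatch log group named /elasticache/slow-log that you create first if it doesn't already exist; for a replication group, use modify-replication-group with --replication-group-id instead. Adjust the names for your environment:
# Create the destination log group (skip if it already exists)
aws logs create-log-group --log-group-name /elasticache/slow-log
# Turn on slow log delivery to CloudWatch Logs in JSON format
aws elasticache modify-cache-cluster --cache-cluster-id my-redis-cluster --apply-immediately --log-delivery-configurations '{"LogType":"slow-log","DestinationType":"cloudwatch-logs","DestinationDetails":{"CloudWatchLogsDetails":{"LogGroup":"/elasticache/slow-log"}},"LogFormat":"json","Enabled":true}'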
https://repost.aws/knowledge-center/elasticache-turn-on-slow-log
How do I resolve errors when issuing a new ACM-PCA certificate?
I tried requesting a new private end-entity certificate or subordinate CA for AWS Certificate Manager (ACM) and the request failed.
"I tried requesting a new private end-entity certificate or subordinate CA for AWS Certificate Manager (ACM) and the request failed.Short descriptionTo troubleshoot failed private certificate requests, check the following:The pathLenConstraint parameter of the issuing certificate authority.The status of the issuing certificate authority.The signing algorithm family of the issuing certificate authority.The validity period of the requested certificate.AWS Identity and Access Management (IAM) permissions.ResolutionThe "pathLenConstraint" parameter of the issuing certificate authorityCreating a CA with a path length greater than or equal to the path length of its issuing CA certificate returns a ValidationException error. Make sure that the pathLenConstraint for issuing an ACM subordinate CA is less than the path length of the issuing CA.The status of the issuing certificate authorityIssuing a new PCA certificate using the IssueCertificate API with an expired a CA (which isn't in Active status) returns a InvalidStateException failure code.If the signing CA is expired, make sure that you renew it first before issuing new subordinate CA certificates or ACM private certificates.The signing algorithm family of the issuing certificate authorityThe AWS Management Console doesn't support issuing private ECDSA certificates, and so the issuing CA is unavailable. This occurs even if an ECDSA private subordinate certificate authority was already created. You can use the IssueCertificate API call and specify the ECDSA variant with the --signing-algorithm flag.The validity period of the requested certificateCertificates issued and managed by ACM (those certificates that ACM generates the private key for) have a validity period of 13 months (395 days).For ACM Private CA, you can use the IssueCertificate API to apply any validity period. However, if you specify the certificate validity period longer than the issuing certificate authority, the certificate issuance fails.It's a best practice to set the CA certificate validity period to a value that's two to five times as large as the period of child or end-entity certificates. For more information, see Choosing validity periods.IAM permissionsPrivate certificates issued with IAM identities must have the required permissions, or the request fails with an "AccessDenied" error. It's a best practice to grant your IAM identities permission to issue private certificates while adhering to the principle of granting least privilege.For more information, see Identity and Access Management for AWS Certificate Manager Private Certificate Authority.Follow"
https://repost.aws/knowledge-center/acm-pca-certificate
How much downtime can I expect during a maintenance window for my RDS for SQL Server Single-AZ or Multi-AZ instance?
I have an upcoming scheduled maintenance window for my Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server Single-AZ or Multi-AZ instance. What downtime should I expect during the maintenance window?
"I have an upcoming scheduled maintenance window for my Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server Single-AZ or Multi-AZ instance. What downtime should I expect during the maintenance window?Short descriptionAWS performs periodic maintenance to the physical host hardware, database version, or the operating system of your Amazon RDS for SQL Server instance. Before maintenance occurs, you receive an email notification. This email details the affected resources, and the scheduled timeframe of the maintenance.Estimated downtime differs for Single-AZ instances and Multi-AZ instances. Also, the downtime that you experience might change depending on the type of maintenance being performed.Note: You can review past scheduled maintenance for your RDS instances by viewing RDS events. Or, you can use the describe-pending-maintenance-actions AWS Command Line Interface (AWS CLI) command or the DescribeDBInstances for the RDS API.ResolutionYou can verify if upcoming maintenance will incur downtime by checking your Personal Health Dashboard. The PHD will state if this maintenance involves downtime for the affected instances.You can check for pending or upcoming maintenance by checking the instance in the RDS console. For more information, see Viewing pending maintenance.You can adjust the time of your maintenance window. For more information, see Adjusting the preferred DB instance maintenance window.Hardware maintenance downtimeSingle-AZ instances: The instance is unavailable for several minutes while maintenance is performed. The exact time the maintenance takes varies depending on the instance class type and database size.Multi-AZ instances: Multi-AZ instances are unavailable for the time that it takes the primary instance to failover. This failover takes no more than 60 seconds. If only the secondary Availability Zone is affected by the maintenance, then there is no failover or downtime.Database engine upgrade downtimeSingle-AZ instances: The instance is unavailable for several minutes while maintenance is performed. The exact time the maintenance takes varies depending on the instance class type and database size.Multi-AZ instances: In Multi-AZ instances, both the primary and standby instances are upgraded. Amazon RDS performs rolling upgrades. So, you have an outage only for the duration of a failover after the secondary is upgraded and promoted to primary. The original primary (now the secondary) then has the upgrade applied to it.OS maintenance downtimeSingle-AZ instances: The instance is unavailable for several minutes while maintenance is performed. The exact time the maintenance takes varies depending on the instance class type and database size.Multi-AZ instances: In Multi-AZ RDS instances, both the primary and standby instances are upgraded. RDS performs rolling upgrades. So, you have an outage only for the duration of a failover after the secondary is upgraded and promoted to primary. The original primary (now the secondary) then has the upgrade applied to it.Related informationWhat do I need to know about the Amazon RDS maintenance window?Follow"
https://repost.aws/knowledge-center/rds-sql-server-maintenance-downtime
Why did my CloudTrail cost and usage increase unexpectedly?
I'm seeing an unexpected increase in cost for AWS CloudTrail in my AWS account. How do I determine what's causing the cost increase?
"I'm seeing an unexpected increase in cost for AWS CloudTrail in my AWS account. How do I determine what's causing the cost increase?Short descriptionUnexpected CloudTrail cost increases usually occur when multiple trails in the same AWS Region record the same management events. To prevent CloudTrail from logging duplicate management events, verify that your trails' Read and Write events settings are configured correctly. For more information, see Trail configuration.To identify duplicate management event records, you can use the AWS Billing and Cost Management console or Amazon Athena queries.To remove duplicate management event records, you can use the CloudTrail console or the AWS Command Line Interface (AWS CLI).To monitor your estimated and ongoing CloudTrail charges, you can use the following:Amazon CloudWatch billing alarmsAWS BudgetsNote: You can deliver one copy of your ongoing management events to Amazon Simple Storage Service (Amazon S3) for free by creating trails. Additional copies of management events incur a charge. For more information, see AWS CloudTrail pricing. To keep copies of your CloudTrail logs in multiple Amazon S3 buckets, you can manually move the data between buckets to reduce cost. For instructions, see How can I copy all objects from one Amazon S3 bucket to another bucket?ResolutionNote: If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.To identify duplicate CloudTrail management event records using the AWS Billing and Cost Management consoleOpen the AWS Billing and Cost Management console. Then, choose Bills.Choose the Bill details by service tab.In AWS Services Charges, expand CloudTrail.Expand the AWS Region to view the event cost record details. Then, review the PaidEventsRecorded metric to identify duplicate management event records.Note: The PaidEventsRecorded metric provides the total count and cost for all additional copies of management events recorded in a specific Region. The DataEventsRecorded metric provides the total count and cost for data events activated on trails in that Region. If the Region has no trails with data events activated, then the DataEventsRecorded metric doesn't appear.To identify duplicate CloudTrail management event records using Athena queriesNote: To run Athena queries on CloudTrail logs, you must have a trail created and configured to send logs to an S3 bucket. For more information, see Creating a trail.You can use Athena to view CloudTrail management events (and data events) stored in your Amazon S3 bucket.For instructions, see How do I automatically create tables in Athena to search through CloudTrail logs? Also, Creating the table for CloudTrail logs in Athena using manual partitioning.To remove duplicate CloudTrail management events from you AWS accountTo remove duplicate management events using the CloudTrail console, follow the instructions in Updating a trail.To remove duplicate management events using the AWS CLI, follow the instructions in Using update-trail.Related informationQuotas in AWS CloudTrailAnalyze security, compliance, and operational activity using CloudTrail and AthenaFollow"
https://repost.aws/knowledge-center/remove-duplicate-cloudtrail-events
How do I set up an Active/Active or Active/Passive Direct Connect connection to AWS from a private or transit virtual interface?
How do I set up an Active/Active or Active/Passive AWS Direct Connect connection to AWS from a private or transit virtual interface?
"How do I set up an Active/Active or Active/Passive AWS Direct Connect connection to AWS from a private or transit virtual interface?ResolutionScenarios with connections in the same RegionScenario 1:Both connections are in the same Region and same colocation.The same prefixes are advertised with the same Border Gateway Protocol (BGP) attributes (such as AS Path and MED) on both the connections from the on-premises location.Egress traffic from AWS to the on-premises location is load balanced based on flow (Active/Active) across both Direct Connect connections.Scenario 2:Both connections are in the same Region but in different colocations facilities.The same prefixes are advertised with the same BGP attributes (such as AS Path and MED) on both the connections from the on-premises location.Egress traffic from AWS to the on-premises location is load balanced based on flow (Active/Active) across both Direct Connect connections.Scenarios with connections in different RegionsScenario 1:Connection A (virtual interface VIF-A) is in Region 1.Connection B (virtual interface VIF-B) is in Region 2.Both virtual interfaces connect to a virtual private cloud (VPC) in Region 1 using a Direct Connect gateway.Both virtual interfaces advertise the same prefixes with the same BGP attributes (such as AS Path and MED) on both the connections from the on-premises location.Egress traffic from the VPC to the on-premises location prefers connection A because it's in the same Region as the VPC.Scenario 2:Connections are two Regions and two colocations facilities.Connection A (virtual interface VIF-A) is in Region 1.Connection B (virtual interface VIF-B) is in Region 2.Both virtual interfaces connect to a VPC in Region 3 using a Direct Connect gateway.Both virtual interfaces advertise the same prefixes with the same BGP attributes (such as AS Path and MED) from the on-premises location.Egress traffic from AWS to the on-premises location is load balanced based on flow (Active/Active) across both Direct Connect connections.Methods for more predictable routingFor more predictable routing than what's possible in the scenarios previously described, use the following methods.For Active/Passive configuration of Direct Connect connections:Apply the local preference BGP community tag. Set a higher preference to the advertised prefixes for the primary or active connection. Then, set a medium or lower preference for the passive connection.AS Path prepend using a shorter AS path on the active connection and a longer AS path on the passive connection.Note: AS Path prepending can't be used to configure Active/Passive connections in environments similar to scenario 1 of "Scenarios with connections in different Regions".Advertise the most specific route using BGP on the active connection.For Active/Active configuration of Direct Connect connections, advertise the prefixes on both Direct Connect connections with the same local preference BGP community tag.Follow"
https://repost.aws/knowledge-center/direct-connect-private-transit-interface
How can I receive an email alert when my AWS CloudFormation stack enters ROLLBACK_IN_PROGRESS status?
I want to receive an email alert when my AWS CloudFormation stack enters ROLLBACK_IN_PROGRESS status during stack creation.
"I want to receive an email alert when my AWS CloudFormation stack enters ROLLBACK_IN_PROGRESS status during stack creation.Short descriptionAfter you complete the steps in the Resolution section, you can expect the notification to work as follows:Your AWS CloudFormation stack sends all notifications to the Amazon Simple Notification Service (Amazon SNS) topic that notifies an AWS Lambda function.The Lambda function parses notifications and sends only "ROLLBACK_IN_PROGRESS" notifications to a second Amazon SNS topic that's configured for email alerts.This second SNS topic sends an email to subscribers regarding the "ROLLBACK_IN_PROGRESS" message.ResolutionCreate an SNS topic and subscription for email alerts1.    Open the Amazon SNS console.2.    In the navigation pane, choose Topics.Note: To use an existing topic, select that topic from the resource list, and then skip to step 7.3.    Choose Create topic.4.    For Name, enter a topic name.5.    For Display name, enter a display name.6.    Choose Create topic.7.    Note your topic's Amazon Resource Name (ARN) for later use.8.    Choose Create subscription.9.    For Topic ARN, choose the SNS topic ARN that you noted in step 7.10.    For Protocol, choose Email.11.    For Endpoint, enter your email address.12.    Choose Create subscription.Note: You'll receive a subscription confirmation email from Amazon SNS from the email address that you entered in step 11.13.    From the confirmation email message, choose Confirm subscription.You'll see a subscription confirmation message in your browser.Create an AWS Identity and Access Management (IAM) policy that allows Lambda to publish to the SNS topic for email alertsNote: This policy also allows Lambda to write to Amazon CloudWatch Logs.1.    Open the IAM console.2.    In the navigation pane, choose Policies.3.    Choose Create policy.4.    Choose the JSON tab, and then enter the following code into the JSON code editor:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "sns:Publish" ], "Resource": [ "{awsExampleSNSTopicARN}" ] }, { "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": "arn:aws:logs:*:*:*" } ]}Note: Replace awsExampleSNSTopicARN with the ARN for the SNS topic that you created for email alerts.5.    Choose Review policy.6.    For Name, enter a policy name.7.    Choose Create policy.Attach the IAM policy to an IAM role for Lambda1.    Open the IAM console.2.    In the navigation pane, choose Roles.3.    Choose Create role.4.    In the Select type of trusted entity section, choose AWS service.5.    In the Choose the service that will use this role section, choose Lambda.6.    Choose Next: Permissions.7.    In the search bar, enter the name of the policy that you created earlier, and then select that policy.8.    Choose Next:Tags, and then create an optional IAM tag.9.    Choose Next: Review.10.    For Role name, enter a role name.11.    Choose Create role.Create a Lambda function and assign the IAM role that you created1.    Open the Lambda console.2.    Choose Create function.3.    Choose Author from scratch.4.    For Name, enter a name for the Lambda function.5.    For Runtime, choose Node.js 10.x.6.    For Execution role, choose Use an existing role.7.    For Existing role, choose the IAM role that you created earlier.8.    Choose Create function.Create a second SNS topic and subscription to notify the Lambda function1.    Open the Amazon SNS console.2.    In the navigation pane, choose Topics.3.    
Choose Create topic.4.    For Name, enter a topic name.5.    For Display name, enter a display name.6.    Choose Create topic.7.    Note the ARN of your topic for later use.8.    Choose Create subscription.9.    For Topic ARN, choose the SNS topic ARN that you noted in step 7.10.    For Protocol, choose AWS Lambda.11.    For Endpoint, choose the Lambda function that you created.12.    Choose Create subscription.Update the Lambda function with a script that publishes to the SNS topic1.    Open the Lambda console.2.    In the navigation pane, choose Functions, and then select the function that you created earlier.3.    In the Function code section, enter the following script into the editor pane:topic_arn = "{awsExampleSNSTopicARN}";var AWS = require('aws-sdk'); AWS.config.region_array = topic_arn.split(':'); // splits the ARN into an array AWS.config.region = AWS.config.region_array[3]; // makes the 4th variable in the array (will always be the region)// #################### BEGIN LOGGING ########################console.log(topic_arn); // just for logging to the that the var was parsed correctlyconsole.log(AWS.config.region_array); // to see if the SPLIT command workedconsole.log(AWS.config.region_array[3]); // to see if it got the region correctlyconsole.log(AWS.config.region); // to confirm that it set the AWS.config.region to the correct region from the ARN// #################### END LOGGING (you can remove this logging section) ########################exports.handler = function(event, context) { const message = event.Records[0].Sns.Message; if (message.indexOf("ROLLBACK_IN_PROGRESS") > -1) { var fields = message.split("\n"); subject = fields[11].replace(/['']+/g, ''); send_SNS_notification(subject, message); }};function send_SNS_notification(subject, message) { var sns = new AWS.SNS(); subject = subject + " is in ROLLBACK_IN_PROGRESS"; sns.publish({ Subject: subject, Message: message, TopicArn: topic_arn }, function(err, data) { if (err) { console.log(err.stack); return; } console.log('push sent'); console.log(data); });}Note: Replace awsExampleSNSTopicARN with the ARN for the SNS topic that you created for email alerts.4.    In the Designer view, in the Add triggers section, choose SNS.5.    In the Configure triggers section, for SNS topic, choose the SNS topic that you created to notify the Lambda function.6.    Choose Add.7.    Choose Save.Set your AWS CloudFormation stack to send all notifications to the SNS topic that notifies the Lambda function1.    Open the AWS CloudFormation console, and follow the steps in the setup wizard to create a stack.2.    For Notification options, choose Existing Amazon SNS topic.3.    Choose the SNS topic that you created to notify the Lambda function.4.    Complete the steps in the setup wizard to create your stack.If you're using the AWS Command Line Interface (AWS CLI) to create a stack, then use the --notification-arns command. This command sends notifications to the SNS topic that notifies the Lambda function. Then, set the value of the SNS topic to the SNS ARN.Related informationTemplate snippetsTemplate anatomyAWS CloudFormation best practicesFollow"
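For reference, the following AWS CLI sketch creates a stack with the --notification-arns option pointing at the SNS topic that notifies the Lambda function. The stack name, template file, and topic ARN are placeholders for your own values:
aws cloudformation create-stack --stack-name my-stack --template-body file://template.yaml --notification-arns arn:aws:sns:us-east-1:111122223333:cfn-stack-events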
https://repost.aws/knowledge-center/cloudformation-rollback-email
How do I use AWS Directory Service and Amazon Connect with Microsoft Active Directory?
I want to configure AWS Directory Service to manage users of Amazon Connect.
"I want to configure Amazon AWS Directory Service to manage users of Amazon Connect.ResolutionFollow these steps to configure AWS Directory Service with Amazon Connect:Create an Active Directory in AWS Directory Service.Create an Amazon Elastic Compute Cloud (Amazon EC2) Windows instance so that you can create and manage users for the directory.Create a new user in the directory.Create an Amazon Connect instance with the newly created directory.Add and manage new users in Amazon Connect.AWS Directory Service offers three types of services. All three types work with Amazon Connect for user management. For more information on AWS Directory Services, see What is AWS Directory Service?PrerequisitesMake sure that Amazon Virtual Private Cloud (Amazon VPC) is set up with these options:There are at least two subnets, with both subnets in different Availability Zones.The VPC is using default hardware tenancy.The VPC is not using IP addresses in the 198.18.0/15 address space.To configure the Amazon VPC, see Get started with Amazon VPC. Note: For more information on the earlier prerequisites, see AWS Managed Microsoft AD prerequisites. For information on the prerequisites related to the other AWS Directory Service types, see AD Connector prerequisites or Simple AD prerequisites.Create an Active Directory in AWS Directory ServiceFollow these steps to configure AWS Managed Microsoft AD:In the AWS Directory Service Management console, choose Set up directory.Complete the form as follows:For Directory type, select AWS Managed Microsoft AD.For Edition, select your edition, Standard or Enterprise.For Directory DNS name, enter any fully qualified domain name of your choice.For Directory NetBIOS name (optional), enter a short identifier for the domain.For Directory description (optional), provide a short description of the directory (no more than 128 words).For Admin password, enter a password. Save this password because you use it later to log in to the EC2 instance.For Confirm password, re-enter the password.Choose Next to continue.Select the VPC and subnets that you set up as prerequisites. Then, choose Next to continue.Choose Create directory. This might take some time to generate and initialize. For more information about [thing], see Create your AWS Managed Microsoft AD directory.Log in to the directory as an admin with your new credentials:User name: AdminPassword: [The password that you created in Step 2.]Create an Amazon EC2 instance for Microsoft Windows ServerTo manage users in the new directory that you created, you must create an EC2 instance. Later you will use this EC2 instance to add, modify, or delete users.Creating an instance is a three-step process:Set up the IAM role for the instanceThe EC2 instance requires an AWS Identity and Access Management (IAM) role so that it can communicate with the Directory Service. Follow these steps to create an IAM role for the instance:In the IAM console, choose Roles, Create role.For Trusted entity, select EC2. Then, choose Next.Configure permissions and service policies as follows:For Permissions policy, select both AWS managed policies:AmazonSSMManagedInstanceCoreAmazonSSMDirectoryServiceAccessThen, choose Next.Create a security group for the instanceYou must create a security group for an EC2 instance. You use this security group later when you create the instance.In the EC2 console, choose Security groups, Create security group.Enter the name of the security group. 
For example, you might use AWSDirectoryEC2SecurityGroup.Select the same VPC where you created AWS Managed Microsoft AD.For Inbound rules, choose Add rule. Then, enter the IP range for Remote Desktop Protocol (RDP) traffic as follows:Type: RDPSource: IP_range_to_allow_the_RDP_trafficNote: Replace IP_range_to_allow_the_RDP_traffic with your required range.For Outbound rules, choose Add rule, and then enter as follows:Type: All trafficDestination: Choose Custom, Select the security group of the directory.Note: The directory's security group has the following name format by default:directoryid_controllers. For example, if the directory id is d-9x1234abcd, then the security group is d-9x1234abcd_controllers.Choose Create security group to create the security group.Create an EC2 instanceFollow these steps to create an EC2 instance:In the EC2 console, choose Instances, Launch instances.Name the EC2 instance (optional).Then, configure the following:For Application and OS Images, select Windows.For Key pair, select the key pair from the dropdown list if it was already created, or create new key pair for the instance.For Network settings, select the same VPC where you created the AWS Managed Microsoft AD directory.For Subnets, choose one of the public subnets associated with the directory. Turn on Auto-assign public IP.For Firewall, (security group), select the security group that you created earlier.For example, AWSDirectoryEC2SecurityGroup.Important: You can select any security group. However, you must edit the security group associated with the directory so that the network can connect between the EC2 instance and the directory.Expand the Advanced details tab.For Domain join directory, select the directory that you created earlier.For the IAM role profile select AWSDirectoryEC2Role, the role that you created earlier.Leave the rest of the configuration as-is, and then choose Launch instance. See Join an EC2 instance to your AWS Managed Microsoft AD Directory for more information on creating an EC2 instance.For more information on joining an EC2 instance with AD Connector, see Seamlessly join a Windows EC2 instance. For more information on joining an EC2 instance with Simple AD, see Seamlessly join a Windows EC2 instance.Create a new user in the directoryFollowing the launch, open a remote session in the instance to configure the directory and create a user in it.Use your directory credentials to log in to the EC2 instance:User name: Admin@Domain.Note: For example, if the directory’s domain is article.awssupport.com, then:User name is Admin@article.awssupport.com.Password is the Admin password that you created earlier.Open a remote session. To open a remote session with your credentials:Make sure that the security group associated with the instance allows RDP traffic. See Connect to your Windows instance using RDP for more information.The EC2 instance seamlessly joins with the Directory Service. If the problem persists, try configuring a manual join for the EC2 instance. See Manually join a Windows instance for more information.After the session becomes active, follow these steps to create a new user:Install Active Directory tools using the steps described in Install the Active Directory Administration Tools on Windows Server 2012 through Windows Server 2019.To open the Windows Administrative Tools, on the Windows EC2 instance remote session, choose Start.Open Active Directory Users and Computers. 
The new window shows your directory domain.From the dropdown list, choose Domain, Organization Unit, Users. For example, if domain is article.awssupport.com, then choose article.awssupport.com, Article, Users.Note: The screen shows Admin as the user. Create a second user with a different name because Admin is a reserved word in Amazon Connect.From the dropdown list, choose Action, New, UserEnter the First name, Last name, and the user’s Logon name. Then, choose Next to continue.Create a password, and then confirm it.Uncheck User must change password on next Logon and check Password never expires. Then, choose Next.Choose Finish.See Create a user for more information on the steps for creating a new user in the directory. Now you’re ready to create an instance in Amazon Connect.Create an Amazon Connect instanceFollow these steps to create the instance:Log in to the Amazon Connect console with your AWS credentials.Choose Add an instance.Note: Choose Get Started if this is the first instance in Amazon Connect.For identity management, select Link to existing directory. Select the directory that you set up earlier from the dropdown list.Enter the Access URL. Then, choose Next to continue.For Add administrator, enter a user name (for example, Jane). Don't enter the complete user’s login name, including the directory domain.Leave the rest of the configuration as-is, and then choose Create instance.To log in to the instance, use the user name and password.Add and manage new users in Amazon Connect instanceEarlier you created a new user in the directory. Now add the user to the Amazon Connect instance::Launch your Connect instance dashboard with the Connect instance credentials.In the Amazon Connect instance dashboard, choose Users, User management.Choose Add new user from the top-right corner.Select the user that you created from the list of users.Select security profile, routing profile, and phone configuration for the new user. Choose Save from the top-right corner.Test the new user's credentials. Log in to the Amazon Connect instance with the user's selected security profile permissions. See Add users to Amazon Connect for more information.Common issues during setupThis section provides guidelines to troubleshoot common problems that you might encounter during setup.Can't sign in to the Windows EC2 instance with the directory’s Admin credentials.Consider the following:The subnets of the EC2 instance and the directory might not match. Make sure that the EC2 instance is in one of the subnets that belongs to the directory.The security group of either the EC2 instance or the directory doesn't allow traffic. When you create a directory, the system creates a default security group associated with the directory to allow traffic. Use this security group to avoid any security group-related issue.Sometimes credentials don't work if the EC2 seamless join doesn't work. Try manually joining the EC2 instance with the DirectoryThe Administrative tools aren't present in the EC2 instance.If you need Administrative tools for the EC2 instance, follow the steps in Install the Active Directory Administration Tools on Windows Server 2012 through Windows Server 2019 to install them.The Directory domain in the Windows EC2 instance isn't available in the Active Directory’s Users and Computers screen.Confirm that the directory and the EC2 instance are connected. 
Make sure that you created the instance according to the steps given in Join an EC2 instance to your AWS Managed Microsoft AD directory.Make sure that the directory’s user has Admin status and is logged in to the EC2 instance.Follow"
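The directory itself can also be created from the AWS CLI instead of the console. The following is a minimal sketch of create-microsoft-ad; the domain name, short name, password, VPC ID, and subnet IDs are placeholders that you replace with your own values:
aws ds create-microsoft-ad --name corp.example.com --short-name CORP --password 'ExamplePassword1!' --edition Standard --vpc-settings VpcId=vpc-0abcd1234,SubnetIds=subnet-0aaa1111,subnet-0bbb2222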
https://repost.aws/knowledge-center/aws-directory-service-for-amazon-connect-users
How do I use API Gateway as a proxy for another AWS service?
I want to use Amazon API Gateway as a proxy for another AWS service and integrate other services with API Gateway.
"I want to use Amazon API Gateway as a proxy for another AWS service and integrate other services with API Gateway.Short descriptionAWS service APIs are essentially REST APIs that you can make an HTTPS request to. You can integrate many AWS services with API Gateway, but the setup and mapping vary based on the particular service API.To integrate another service with API Gateway, build an HTTPS request from API Gateway to the service API. This way, all request parameters are correctly mapped.This article describes an example setup for integrating the Amazon Simple Notification Service (Amazon SNS) Publish API with API Gateway. Use this example as an outline for integrating other services.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.Set up the required tools and resourcesConfigure your environment and create all the AWS resources required for your use case. For the Amazon SNS example setup, do the following:Install the AWS CLI.Create an Amazon SNS topic. Note the topic's Amazon Resource Name (ARN). Use this information in the next step, and later in this setup.Create a subscription to the topic.Open the AWS Identity and Access Management (IAM) console and then create an AWS service proxy execution role. Note the role's ARN for later in the setup. This IAM role gives API Gateway permissions as a trusted entity to assume the service and perform the API action that you're integrating. For the Amazon SNS example setup, allow the action sns:Publish. For more information, see API Gateway permissions model for invoking an API.Create an API Gateway REST API with a test resource. For more information and examples, see Amazon API Gateway tutorials and workshops.Note: Optionally, import the REST API using the following sample OpenAPI 2.0 (Swagger) definition. This option preconfigures the settings for the Amazon SNS example setup. Be sure to replace arn:aws:iam::account-id:role/apigateway-sns-role with your IAM role's ARN. Replace region with the AWS Region where you want to create your REST API.{ "swagger": "2.0", "info": { "version": "2019-10-09T14:10:24Z", "title": "aws-service-integration" }, "basePath": "/dev", "schemes": [ "https" ], "paths": { "/test": { "post": { "produces": [ "application/json" ], "parameters": [ { "name": "Message", "in": "query", "required": true, "type": "string" }, { "name": "TopicArn", "in": "query", "required": true, "type": "string" } ], "responses": { "200": { "description": "200 response", "schema": { "$ref": "#/definitions/Empty" } } }, "x-amazon-apigateway-integration": { "credentials": "arn:aws:iam::account-id:role/apigateway-sns-role", "uri": "arn:aws:apigateway:region:sns:action/Publish", "responses": { "default": { "statusCode": "200" } }, "requestParameters": { "integration.request.querystring.TopicArn": "method.request.querystring.TopicArn", "integration.request.querystring.Message": "method.request.querystring.Message" }, "passthroughBehavior": "when_no_match", "httpMethod": "POST", "type": "aws" } } } }, "definitions": { "Empty": { "type": "object", "title": "Empty Schema" } }}Get an example HTTPS requestAn example HTTPS request from the service API that you're integrating can help you correctly map the request parameters in API Gateway. To get an example HTTPS request, do one of the following:Check for examples in the API documentation. 
For the Amazon SNS Publish API, you can refer to the service's API Reference for an example request:https://sns.us-west-2.amazonaws.com/?Action=Publish&TargetArn=arn%3Aaws%3Asns%3Aus-west-2%3A803981987763%3Aendpoint%2FAPNS_SANDBOX%2Fpushapp%2F98e9ced9-f136-3893-9d60-776547eafebb&Message=%7B%22default%22%3A%22This+is+the+default+Message%22%2C%22APNS_SANDBOX%22%3A%22%7B+%5C%22aps%5C%22+%3A+%7B+%5C%22alert%5C%22+%3A+%5C%22You+have+got+email.%5C%22%2C+%5C%22badge%5C%22+%3A+9%2C%5C%22sound%5C%22+%3A%5C%22default%5C%22%7D%7D%22%7D&Version=2010-03-31&AUTHPARAMS- or -Generate it from an API call. Use the AWS CLI to call the service API, and then analyze the output. Determine the corresponding AWS CLI command for the service API that you're integrating, and then run a test request with the --debug option.Tip: Check the AWS CLI Command Reference to find the corresponding AWS CLI command.For the Amazon SNS example setup, run this command:Note: Replace arn:aws:sns:us-east-1:123456789012:test with your Amazon SNS topic's ARN.$ aws sns publish --topic-arn arn:aws:sns:us-east-1:123456789012:test --message "hi" --debugThe command output contains the HTTPS request and the headers that are passed. Here's an example of what to look for:2018-11-22 11:56:39,647 - MainThread - botocore.client - DEBUG - Registering retry handlers for service: sns2018-11-22 11:56:39,648 - MainThread - botocore.hooks - DEBUG - Event before-parameter-build.sns.Publish: calling handler <function generate_idempotent_uuid at 0x11093d320>2018-11-22 11:56:39,649 - MainThread - botocore.endpoint - DEBUG - Making request for OperationModel(name=Publish) (verify_ssl=True) with params: {'body': {'Action': u'Publish', u'Message': u'hello', 'Version': u'2010-03-31', u'TopicArn': u'arn:aws:sns:us-east-1:123456789012:test'}, 'url': u'https://sns.us-east-1.amazonaws.com/', 'headers': {'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', 'User-Agent': 'aws-cli/1.15.74 Python/2.7.14 Darwin/16.7.0 botocore/1.9.23'}, 'context': {'auth_type': None, 'client_region': 'us-east-1', 'has_streaming_input': False, 'client_config': <botocore.config.Config object at 0x1118437d0>}, 'query_string': '', 'url_path': '/', 'method': u'POST'}2018-11-22 11:56:39,650 - MainThread - botocore.hooks - DEBUG - Event request-created.sns.Publish: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x111843750>>2018-11-22 11:56:39,650 - MainThread - botocore.hooks - DEBUG - Event choose-signer.sns.Publish: calling handler <function set_operation_specific_signer at 0x11093d230>2018-11-22 11:56:39,650 - MainThread - botocore.auth - DEBUG - Calculating signature using v4 auth.2018-11-22 11:56:39,651 - MainThread - botocore.auth - DEBUG - CanonicalRequest:POST/content-type:application/x-www-form-urlencoded; charset=utf-8host:sns.us-east-1.amazonaws.comx-amz-date:20181122T062639Zcontent-type;host;x-amz-dateIn this example, the request is sent as a POST HTTP method.Create a method for your API Gateway APIIn the API Gateway console, on the APIs pane, choose the name of your API.In the Resources pane, choose a resource. For the Amazon SNS example setup, choose the test resource that you created.Choose Actions, and then choose Create Method.In the dropdown list, choose the method used by your service API in the example HTTPS request. (For the Amazon SNS example setup, choose POST.) 
Then, choose the check mark icon.On the Setup pane, do the following:For Integration type, choose AWS Service.For AWS Region, choose the AWS Region of the resource associated with the service API that you're integrating. For the Amazon SNS example setup, choose the Region of your SNS topic.For AWS Service, choose the service that you're integrating with API Gateway. For example, Simple Notification Service (SNS).(Optional) For AWS Subdomain, enter the subdomain used by the AWS service. Check the service's documentation to confirm the availability of a subdomain. For the Amazon SNS example setup, leave it blank.For HTTP method, choose the method that corresponds to the AWS service API that you're integrating. For the Amazon SNS example setup, choose POST.For Action Type, if the service API that you're integrating is a supported action, choose Use action name. Check the service's API reference for a list of supported actions. For Amazon SNS, see Actions.For Action, enter the name of the service API. For the Amazon SNS example setup, enter Publish.-or-For Action Type, if the AWS service API expects a resource path in your request, choose Use path override. For example, for the Amazon Polly ListLexicons API, enter /v1/lexicons for Path override (optional).For Execution role, enter the ARN of the IAM role that you created.(Optional) For Content Handling and Use Default Timeout, make changes as needed for your use case. For the Amazon SNS example setup, don't change these settings.Choose Save.Create parameters for the method requestDetermine the required and optional request parameters for the service API that you're integrating. To identify these parameters, refer to the example HTTPS request that you got earlier, or refer to the API Reference for the service API. For example, see Publish.In the API Gateway console, on the Method Execution pane for your API Gateway API's method, choose Method Request.(Optional) On the Method Request pane, for Request Validator, choose a request validator, body, and headers if you want to validate the query string parameters.Expand URL Query String Parameters.Choose Add query string.For Name, enter the name of a request parameter for the service API that you're integrating.Choose the check mark icon (Create a new query string).If the parameter is required, select the check box under Required.Repeat steps 4-7 for all request parameters that you want to include. For the Amazon SNS example setup, create a parameter named TopicArn and another named Message.For more information, see Set up a method using the API Gateway console.Note: For some service APIs, you must send required headers and a body in the integration request in addition to the required parameters. You can create the headers and body on the Integration Request pane under HTTP Request Headers and Request Body.For example, if you're integrating the Amazon Rekognition ListCollections API, create the header X-Amz-Target: RekognitionService.ListCollections. 
The request looks like this:POST https://rekognition.us-west-2.amazonaws.com/ HTTP/1.1 Host: rekognition.us-west-2.amazonaws.com Accept-Encoding: identity Content-Length: 2 X-Amz-Target: RekognitionService.ListCollections X-Amz-Date: 20170105T155800Z User-Agent: aws-cli/1.11.25 Python/2.7.9 Windows/8 botocore/1.4.82 Content-Type: application/x-amz-json-1.1 Authorization: AWS4-HMAC-SHA256 Credential=XXXXXXXX/20170105/us-west-2/rekognition/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-amz-target, Signature=XXXXXXXX {}If you're integrating the Amazon Simple Queue Service (Amazon SQS) SendMessage API, map the request body using the mapping expression method.request.body.JSONPath_EXPRESSION. (Replace JSONPath_EXPRESSION with a JSONPath expression for a JSON field of the body of the request.) In this example, a request looks similar to the following:{'url_path': '/', 'query_string': '', 'method': 'POST','headers': {'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', 'User-Agent': 'aws-cli/1.16.81 Python/3.6.5 Darwin/18.7.0 botocore/1.12.183'}, 'body': {'Action': 'SendMessage', 'Version': '2012-11-05', 'QueueUrl': 'https://sqs.ap-southeast-2.amazonaws.com/123456789012/test01', 'MessageBody': 'Hello'}, 'url': 'https://ap-southeast-2.queue.amazonaws.com/', 'context': {'client_region': 'ap-southeast-2', 'client_config': <botocore.config.Config object at 0x106862da0>, 'has_streaming_input': False, 'auth_type': None}}Create parameters for the integration requestMap the parameters that you created for the method request to parameters for the integration request.In the API Gateway console, go back to the Method Execution pane for your API Gateway API's method, and then choose Integration Request.On the Integration Request pane, expand URL Query String Parameters.Choose Add query string.For Name, enter the name of a request parameter for the service API that you're integrating.Note: The name is case-sensitive and must appear exactly as expected by the service API.For Mapped from, enter method.request.querystring.param_name. Replace param_name with the name of the corresponding parameter that you created for the method request. For example, method.request.querystring.TopicArn.Choose the check mark icon (Create).Repeat steps 3-6 to create parameters for the integration request that correspond to each of the parameters that you created for the method request.Note: If you created required headers and a body for the method request, map them to the integration request, too. 
Create them on the Integration Request pane under HTTP Headers and Mapping Templates.For more information, see Set up an API integration request using the API Gateway console.(Optional) Check your integration configurationTo confirm that your integration setup looks as you expect, you can run the AWS CLI get-integration command to check the configuration similar to the following:$ aws apigateway get-integration --rest-api-id 1234123412 --resource-id y9h6rt --http-method POSTFor the Amazon SNS example setup, the output looks similar to the following:{ "integrationResponses": { "200": { "responseTemplates": { "application/json": null }, "statusCode": "200" } }, "passthroughBehavior": "WHEN_NO_MATCH", "timeoutInMillis": 29000, "uri": "arn:aws:apigateway:us-east-2:sns:action/Publish", "httpMethod": "POST", "cacheNamespace": "y9h6rt", "credentials": "arn:aws:iam::1234567890:role/apigateway-sns-role", "type": "AWS", "requestParameters": { "integration.request.querystring.TopicArn": "method.request.querystring.TopicArn", "integration.request.querystring.Message": "method.request.querystring.Message" }, "cacheKeyParameters": []}In the API Gateway console, go back to the Method Execution pane for your API Gateway API's method, and then choose TEST.On the Method Test pane, do the following:For Query Strings, enter a query string that includes request parameters and values for them. For the Amazon SNS example setup, enter TopicArn= arn:aws:sns:us-east-1:123456789012:test&Message="Hello". Replace arn:aws:sns:us-east-1:123456789012:test with your Amazon SNS topic's ARN.For Headers and Request Body, if you created these for your setup, enter the header names and request body JSON.Choose Test. A response appears in the Method Test pane. If the response is successful, you see Status: 200. For the Amazon SNS example setup, a successful response includes a MessageId in the response body.For more information, see Use the API Gateway console to test a REST API method.Deploy your REST API.Test your API using any tool that you prefer.Related informationTutorial: Build an API Gateway REST API with AWS integrationSet up REST API methods in API GatewaySetting up REST API integrationsSet up request and response data mappings using the API Gateway consoleFollow"
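You can also run the same test from the AWS CLI with test-invoke-method. This sketch reuses the example REST API ID and resource ID shown in the get-integration output above; replace them and the topic ARN with your own values:
aws apigateway test-invoke-method --rest-api-id 1234123412 --resource-id y9h6rt --http-method POST --path-with-query-string "/test?TopicArn=arn:aws:sns:us-east-1:123456789012:test&Message=Hello"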
https://repost.aws/knowledge-center/api-gateway-proxy-integrate-service
How do I use VM Import/Export to export a VM based on my Amazon Machine Image (AMI)?
I want to export a copy of my Amazon Machine Image (AMI) as a virtual machine (VM) to deploy in my on-site virtualization environment. How do I use VM Import/Export to do that?
"I want to export a copy of my Amazon Machine Image (AMI) as a virtual machine (VM) to deploy in my on-site virtualization environment. How do I use VM Import/Export to do that?Short descriptionYou can use the AWS Command Line Interface (AWS CLI) to start an image export task using VM Import/Export. Then, a copy of your Amazon Machine Image (AMI) is exported as a VM file and written to an Amazon Simple Storage Service (Amazon S3) bucket. You can use the exported VM to deploy a new, standardized instance in your on-site virtualization environment. You can export most AMIs to Citrix Xen, Microsoft Hyper-V, or VMware vSphere.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Before starting the following resolution steps, do the following:Review the considerations and limitations of VM Import/Export to verify that VM Import/Export supports your AMI.If you sign in as an AWS Identity and Access Management (IAM) user to use VM Import/Export, be sure to have the required permissions for IAM users in your policy.Resolution1.    Create an Amazon Elastic Block Store (Amazon EBS) backed AMI from the EC2 instance that you want to export.For Linux, see Create an Amazon EBS-backed Linux AMI.For Windows, see Create a custom Windows AMI.2.    Install the AWS CLI on a client machine and configure it with the AWS credentials generated for your IAM user.3.    Create a new S3 bucket in the same AWS Region as the AMI that you plan to export.4.     Create the required service role. As a prerequisite, make sure to enable AWS Security Token Service (AWS STS) in the Region where you're using VM Import/Export.To create the service role, first create a file named trust-policy.json on your computer and then add the following policy to the file:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "vmie.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals":{ "sts:Externalid": "vmimport" } } } ]}Run the create-role command to create a role named vmimport using the trust-policy.json file to grant VM Import/Export access to the role:aws iam create-role --role-name vmimport --assume-role-policy-document "file://C:\import\trust-policy.json"Note: In the preceding example, make sure to specify the full path to the location of the trust-policy.json file that you created. Be sure to include the file:// prefix.Create another file named role-policy.json on your computer and add the following policy to the file. Replace my-export-bucket with your S3 bucket name.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket", "s3:PutObject", "s3:GetBucketAcl" ], "Resource": [ "arn:aws:s3:::my-export-bucket", "arn:aws:s3:::my-export-bucket/*" ] }, { "Effect": "Allow", "Action": [ "ec2:ModifySnapshotAttribute", "ec2:CopySnapshot", "ec2:RegisterImage", "ec2:Describe*" ], "Resource": "*" } ]}Use the put-role-policy command to attach the role-policy.json policy to the vmimport role that you created previously:aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document "file://C:\import\role-policy.json"In the preceding example, be sure to specify the full path to the location of the role-policy.json file.Note: Future updates to VM Import/Export might require additional permissions in the vmimport role. 
Refer to the example policy in the required service role documentation for the most up-to-date example of the required permissions.5.    From the client machine where you installed the AWS CLI, run the AWS CLI command export-image to start the export image task:aws ec2 export-image --image-id ami-id --disk-image-format VMDK --s3-export-location S3Bucket=my-export-bucket,S3Prefix=exports/Note: In the preceding example, replace ami-id with your AMI ID. Choose the desired disk image format (VMDK, RAW, or VHD). Replace my-export-bucket with your S3 bucket name. The exported file is written to your specified S3 bucket using the S3 key prefix export-ami-id.format (for example, my-export-bucket/exports/export-ami-1234567890abcdef0.vmdk). You can add a prefix to the exported file.If the request is successful, the export-image command output returns details about the task, including an export image task ID, as shown in the following example:{ "DiskImageFormat": "vmdk", "ExportImageTaskId": "export-ami-1234567890abcdef0", "ImageId": "ami-1234567890abcdef1", "RoleName": "vmimport", "Progress": "0", "S3ExportLocation": { "S3Bucket": "my-export-bucket", "S3Prefix": "exports/" }, "Status": "active", "StatusMessage": "validating"}6.    To check the status of your export image task, run the AWS CLI command describe-export-image-tasks.Exampleaws ec2 describe-export-image-tasks --export-image-task-ids export-ami-idNote: In the preceding example, replace export-ami-id with the export image task ID from the export-image command output.The describe-export-image-tasks command output returns details about the progress and overall status of the task. The following example output is for an export image task that is in an active status and in progress:{ "ExportImageTasks": [ { "ExportImageTaskId": "export-ami-1234567890abcdef0", "Progress": "21", "S3ExportLocation": { "S3Bucket": "my-export-bucket", "S3Prefix": "exports/" }, "Status": "active", "StatusMessage": "updating" } ]}7.    When the status of your export image task changes to "completed", the exported file is ready in your S3 bucket as an object. The following example output shows a completed export image task. The resulting exported file in Amazon S3 is my-export-bucket/exports/export-ami-1234567890abcdef0.vmdk.{ "ExportImageTasks": [ { "ExportImageTaskId": "export-ami-1234567890abcdef0", "S3ExportLocation": { "S3Bucket": "my-export-bucket", "S3Prefix": "exports/" }, "Status": "completed" } ]}8.    Access your S3 bucket using the Amazon S3 console to locate and download the object.Related informationExport from an AMICreating an IAM user in your AWS accountProgrammatic access (access key ID and secret access key)Amazon Machine Images (AMI)Troubleshooting VM Import/ExportFollow"
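A minimal boto3 sketch of steps 5-7, not part of the original article. It assumes the vmimport role and the my-export-bucket S3 bucket from the article already exist; the AMI ID is a placeholder.

import time
import boto3

ec2 = boto3.client("ec2")

# Start the export image task (equivalent of "aws ec2 export-image").
task = ec2.export_image(
    ImageId="ami-1234567890abcdef1",          # replace with your AMI ID
    DiskImageFormat="VMDK",                   # or RAW / VHD
    S3ExportLocation={"S3Bucket": "my-export-bucket", "S3Prefix": "exports/"},
)
task_id = task["ExportImageTaskId"]

# Poll the task until it leaves the "active" status
# (equivalent of "aws ec2 describe-export-image-tasks").
while True:
    result = ec2.describe_export_image_tasks(ExportImageTaskIds=[task_id])
    status = result["ExportImageTasks"][0]["Status"]
    print(task_id, status)
    if status != "active":
        break
    time.sleep(60)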
https://repost.aws/knowledge-center/ec2-export-vm-using-import-export
How do I graph older metrics that aren't listed on the CloudWatch console?
I want to graph older metrics that are no longer listed on the Amazon CloudWatch console. How can I do this?
"I want to graph older metrics that are no longer listed on the Amazon CloudWatch console. How can I do this?Short descriptionIf data isn't published for more than 14 days, then a metric no longer appears on the CloudWatch console and can't be retrieved using ListMetrics. You can no longer search for this metric to graph it. To view the metric, you must manually provide the metric source.The example in this article shows you how to view metrics for a terminated Amazon Elastic Compute Cloud (Amazon EC2) instance.ResolutionNote: Before you begin, check that you have the correct NameSpace, Metric Name and all the Dimensions associated with that metric. A metric is a unique combination of Namespace, MetricName, and Dimension. Note that metric values are case sensitive.Open the CloudWatch console.In the navigation pane, in the Metrics section, choose All metrics.Choose Source, and then enter the JSON block for the metric that you want to graph. This example graphs the CPUUtilization metric of an instance that was terminated 15 days ago:{ "view": "timeSeries", "metrics": [ [ "AWS/EC2", "CPUUtilization", "InstanceId", "i-abc123456789" ] ], "region": "us-east-1"}If there are multiple dimensions in the metric that you want to graph, be sure to include them in the metrics section using this format:[ "NameSpace", "MetricName", "Dimension Key", "Dimension Value" ]To graph multiple metrics, add a comma after the end bracket if there isn't one already. Then, add this to the metrics section:["NameSpace1", "MetricName1", "Dimension Key1", "Dimension Value1"], ["NameSpace2", "MetricName2", "Dimension Key2, "Dimension Value2"]After you add the JSON to graph your metric, chooseGraph metrics. You can now adjust theTime Range,Period andStatistic.Related informationAdding old metrics to a graphFollow"
https://repost.aws/knowledge-center/cloudwatch-graph-older-metrics
Why did my Amazon Redshift cluster reboot outside of the maintenance window?
My Amazon Redshift cluster restarted outside the maintenance window. Why did my cluster reboot?
"My Amazon Redshift cluster restarted outside the maintenance window. Why did my cluster reboot?Short descriptionAn Amazon Redshift cluster is restarted outside of the maintenance window for the following reasons:An issue with your Amazon Redshift cluster was detected.A faulty node in the cluster was replaced.To be notified about any cluster reboots outside of your maintenance window, create an event notification for your Amazon Redshift cluster.ResolutionAn issue with your Amazon Redshift cluster was detectedHere are some common issues that can trigger a cluster reboot:An out-of-memory (OOM) error on the leader node: A query that runs on a cluster that's upgraded to a newer version can cause an OOM exception, initiating a cluster reboot. To resolve this, consider rolling back your patch or failed patch.An OOM error resulting from an older driver version: If you're working on an older driver version and your cluster is experiencing frequent reboots, download the latest JDBC driver version. It's a best practice to test the driver version in your development environment before you use it in production.Health check queries failure: Amazon Redshift constantly monitors the availability of its components. When a health check fails, Amazon Redshift initiates a restart to bring the cluster to a healthy state as soon as possible. Doing so reduces the amount of downtime.Prevent health check query failuresThe most common health check failures happen when the cluster has long-running open transactions. When Amazon Redshift cleans up memory associated with long running transactions, that process can cause the cluster to lock up. To prevent these situations, it's a best practice to monitor unclosed transactions using the following queries.For long open connections, run the following example query:select s.process as process_id, c.remotehost || ':' || c.remoteport as remote_address, s.user_name as username, s.db_name, s.starttime as session_start_time, i.starttime as start_query_time, datediff(s,i.starttime,getdate())%86400/3600||' hrs '|| datediff(s,i.starttime,getdate())%3600/60||' mins ' || datediff(s,i.starttime,getdate())%60||' secs 'as running_query_time, i.text as queryfrom stv_sessions sleft join pg_user u on u.usename = s.user_nameleft join stl_connection_log c on c.pid = s.process and c.event = 'authenticated'left join stv_inflight i on u.usesysid = i.userid and s.process = i.pidwhere username <> 'rdsdb'order by session_start_time desc;For long-open transactions, run the following example query:select *,datediff(s,txn_start,getdate())/86400||' days '||datediff(s,txn_start,getdate())%86400/3600||' hrs '||datediff(s,txn_start,getdate())%3600/60||' mins '||datediff(s,txn_start,getdate())%60||' secs' from svv_transactionswhere lockable_object_type='transactionid' and pid<>pg_backend_pid() order by 3;After you have this information, you can review the transactions that are still opened by running the following query:select * from svl_statementtext where xid = <xid> order by starttime, sequence)To terminate idle sessions and free up the connections, use the PG_TERMINATE_BACKEND command.A faulty node in the Amazon Redshift cluster was replacedEach Amazon Redshift node runs on a separate Amazon Elastic Compute Cloud (Amazon EC2) instance. A failed node is an instance that fails to respond to any heartbeat signals sent during the monitoring process. 
Heartbeat signals periodically monitor the availability of compute nodes in your Amazon Redshift cluster.These automated health checks try to recover the Amazon Redshift cluster when an issue is detected. When Amazon Redshift detects any hardware issues or failures, nodes are automatically replaced in the following maintenance window. Note that in some cases, faulty nodes must be replaced immediately to make sure that your cluster is performing properly.Here are some of the common causes of failed cluster nodes:EC2 instance failure: When the underlying hardware of an EC2 instance is found to be faulty, the faulty node is then replaced to restore cluster performance. EC2 tags the underlying hardware as faulty if there is a lack of response or failure to pass any automated health checks.Node replacement due to a faulty disk drive of a node: When an issue is detected with the disk on a node, Amazon Redshift either replaces the disk or restarts the node. If the Amazon Redshift cluster fails to recover, the node is replaced or scheduled to be replaced.Internode communication failure: If there is a communication failure between the nodes, then the control messages aren't received by a particular node at the specified time. Internode communication failures are caused by an intermittent network connection issue or an issue with the underlying host.Discovery timeout: An automatic node replacement is triggered if a node or cluster can't be reached within the specified time.Out-of-memory (OOM) exception: Heavy load on a particular node can cause OOM issues, triggering a node replacement.Creating Amazon Redshift event notificationsTo identify the cause of your cluster reboot, create an Amazon Redshift event notification that subscribes to cluster reboot events. The event notification also notifies you if the source was configured.Related informationAmazon Redshift event categories and event messagesManaging event notifications using the Amazon Redshift consoleManaging event notifications using the Amazon Redshift CLI and APIFollow"
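A minimal boto3 sketch, not part of the original article, that creates an event subscription like the one described above. The SNS topic ARN, subscription name, and cluster identifier are placeholders; the event categories shown are one reasonable choice for reboot-related notifications.

import boto3

redshift = boto3.client("redshift")

redshift.create_event_subscription(
    SubscriptionName="cluster-reboot-alerts",
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:redshift-alerts",  # hypothetical topic
    SourceType="cluster",
    SourceIds=["my-redshift-cluster"],          # hypothetical cluster identifier
    EventCategories=["management", "monitoring"],
    Severity="INFO",
    Enabled=True,
)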
https://repost.aws/knowledge-center/redshift-reboot-maintenance-window
"I can use my application from a custom origin (EC2 instance or load balancer), but it fails on CloudFront. Why?"
"I'm using an Amazon Elastic Compute Cloud (Amazon EC2) instance or a load balancer as the custom origin for my website or application. I can connect to the custom origin directly, but I can't get the same content from Amazon CloudFront, or CloudFront returns an error. How can I troubleshoot this?"
"I'm using an Amazon Elastic Compute Cloud (Amazon EC2) instance or a load balancer as the custom origin for my website or application. I can connect to the custom origin directly, but I can't get the same content from Amazon CloudFront, or CloudFront returns an error. How can I troubleshoot this?ResolutionTo troubleshoot, try the following steps:Identify the error responseDetermine the HTTP response headers returned by CloudFront by reviewing the network tab on your browser developer tools. Or, use a utility like cURL.If you're receiving an HTTP 502 status code (Bad Gateway) response, the issue is likely from the SSL connection between CloudFront and the origin. For troubleshooting instructions, see HTTP 502 Status Code (Bad Gateway).If you're receiving an HTTP 504 Status Code (Gateway Timeout) response, the issue is likely from access configurations in the security groups or firewall. For troubleshooting instructions, see HTTP 504 Status Code (Gateway Timeout).Verify forwarding based on request headers, cookies, or query stringsIf your application requires certain request headers, cookies, or query strings, update your distribution's cache behaviors to forward the required parameters to the origin. CloudFront might not forward the required parameters in the default settings.For more information, see Caching content based on cookies, Caching content based on query string parameters, and Caching content based on request headers.Check allowed HTTP methodsBy default, CloudFront allows only GET and HEAD HTTP methods. If you're running an application on your origin server and you're accessing your application through CloudFront, review the HTTP methods required for calls to your application. Those HTTP methods must also be allowed on your distribution. For example, if you're running an application to submit a form, you might need to allow the POST method on your distribution. For instructions on how to change allowed HTTP methods on your distribution, see Allowed HTTP methods.Resolve SSL issues between the client and CloudFrontIf you can't access your website or application through CloudFront because of SSL issues, see Why isn't CloudFront serving my domain name over HTTPS?Resolve constant redirection issuesIf you're seeing constant redirection when you try to load your website or application through CloudFront, check the origin configuration on CloudFront. Additionally, check the origin server's redirection policy.In a typical workflow, a client connects to CloudFront, and then CloudFront connects to the origin server. The origin protocol policy of your distribution and the redirection policy of the origin server must be compatible with each other for the workflow to succeed.For example, if your origin server redirects all HTTP requests to HTTPS, and your distribution's origin protocol policy is set to HTTP, then requests are sent in a loop. In this scenario, if the client requests http://d12345.cloudfront.net/example.image, CloudFront makes a request to the origin server to get the content over HTTP. The request lands at the origin server, which then redirects the request from HTTP to HTTPS. The request is routed back to CloudFront using HTTPS, then CloudFront makes a request to the origin again using HTTP, which restarts the request loop.To resolve the constant redirection, use one of the following configurations:Change your CloudFront distribution's origin protocol policy to use only HTTPS. 
This requires your custom origin server to have a valid SSL certificate installed.If you don't have a valid SSL certificate installed on your origin server, you can remove the redirection policy. Then, you can configure the origin server to accept HTTP requests.Warning: HTTP requests are not recommended for sensitive information, because the communication is in plaintext.Related informationUsing Amazon EC2 or other custom originsFollow"
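A minimal diagnostic sketch, not part of the original article. It prints the status code and headers that CloudFront returns (the "identify the error response" step) and then checks each custom origin's protocol policy, which is the setting involved in the redirect-loop case. The distribution ID and URL are placeholders.

import urllib.request
import urllib.error
import boto3

# Identify the error response, similar to using browser developer tools or cURL.
req = urllib.request.Request("https://d12345.cloudfront.net/example.image", method="GET")
try:
    with urllib.request.urlopen(req) as resp:
        print(resp.status, dict(resp.headers))
except urllib.error.HTTPError as err:
    print(err.code, dict(err.headers))   # for example, 502 or 504 returned by CloudFront

# Check the origin protocol policy configured on the distribution.
cloudfront = boto3.client("cloudfront")
config = cloudfront.get_distribution_config(Id="E1234567890ABC")["DistributionConfig"]
for origin in config["Origins"]["Items"]:
    custom = origin.get("CustomOriginConfig")
    if custom:
        print(origin["Id"], custom["OriginProtocolPolicy"])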
https://repost.aws/knowledge-center/custom-origin-cloudfront-fails
How do I activate TOTP multi-factor authentication for Amazon Cognito user pools?
I want to activate multi-factor authentication (MFA) for the users of my app. How can I do that with a time-based one-time password (TOTP) token using Amazon Cognito user pools?
"I want to activate multi-factor authentication (MFA) for the users of my app. How can I do that with a time-based one-time password (TOTP) token using Amazon Cognito user pools?Short descriptionTo activate TOTP MFA for your app users, set up TOTP software token MFA for your user pool.Important: Before configuring the TOTP token, keep in mind the following:You must add MFA to your user pool before configuring the TOTP token.TOTP tokens can't be associated with a user until they attempt to log in to your app, or unless they're already authenticated.MFA doesn't support federated users in a user pool.The following is an example of how to set up TOTP MFA using the AWS Command Line Interface (AWS CLI) and Google Authenticator.ResolutionNote: If you receive errors when running AWS CLI commands, make sure that you're using the most recent AWS CLI version.1.    Run the following AssociateSoftwareToken command from the AWS CLI to start the MFA token generator setup:aws cognito-idp associate-software-token --access-token eyJraWQiO........ua5Pq3NaA{ "SecretCode": "AETQ6XXMDFYMEPFQQ7FD4HKXXXXAOY3MBXIVRBLRXX3SXLSHHWOA"}2.    Open the Google Authenticator homepage, and then choose Get started.3.    Choose Enter a setup key.4.    For Account name, enter an account name. For example, BobPhone.Note: The account name can be any string identifier.5.    For the Your key text input, copy and paste the secret code that was generated from the AssociateSoftwareToken command that you ran in step one.6.    Choose the Type of key dropdown list, and then select Time based.7.    Verify the software token using the time-based password that appears on the screen and in the following code:aws cognito-idp verify-software-token --access-token eyJraWQiO........ua5Pq3NaA --user-code 269194 --friendly-device-name BobPhone{ "Status": "SUCCESS"}8.    Configure the user's MFA configuration to TOTP MFA using one of the following commands in the AWS CLI:set-user-mfa-preferenceThis command allows users to set their own MFA configuration.Example set-user-mfa-preference commandaws cognito-idp set-user-mfa-preference --software-token-mfa-settings Enabled=true,PreferredMfa=true --access-token eyJraWQiO........ua5Pq3NaAadmin-set-user-mfa-preferenceThis command allows an admin to set a user's MFA configuration.Example admin-set-user-mfa-preference commandaws cognito-idp admin-set-user-mfa-preference --software-token-mfa-settings Enabled=true,PreferredMfa=true --username Bob --user-pool-id us-east-1_1234567899.    Test your setup by authenticating the user in one of these ways:The Amazon Cognito hosted UI.The InitiateAuth or AdminInitiateAuth API calls in the AWS CLI.Note: To authenticate a user with either method, you must have the user's password, user name, and software MFA code.The following examples show how to test user authentication using the AdminInitiateAuth command.Example admin-initiate-auth commandaws cognito-idp admin-initiate-auth --user-pool-id us-east-1_123456789 --client-id 3n4b5urk1ft4fl3mg5e62d9ado --auth-flow ADMIN_USER_PASSWORD_AUTH --auth-parameters USERNAME=Bob,PASSWORD=P@ssw0rdImportant: Make sure to replace the following variables with your own information: user-pool-id, client-id, username, and password. Also, make sure to activate ALLOW_ADMIN_USER_PASSWORD_AUTH flow for the user pool app client by doing the following:1.    Open the Amazon Cognito console.2.    Choose Manage User Pools.3.    Choose your app client and select Show details.4.    
Choose Enable username password auth for admin APIs for authentication (ALLOW_ADMIN_USER_PASSWORD_AUTH).5.  Choose Save app client changes.For more information, see Admin authentication flow.Example output from admin-initiate-auth command{ "ChallengeName": "SOFTWARE_TOKEN_MFA", "ChallengeParameters": { "FRIENDLY_DEVICE_NAME": "BobPhone", "USER_ID_FOR_SRP": "Bob" }, "Session": "Xxz6iadwuWJGN4Z7f4ul5p50IHUqITquoaNxxyDvep.......3A6GokZWKeQ6gkFW4Pgv"}Example admin-respond-to-auth-challenge commandaws cognito-idp admin-respond-to-auth-challenge --user-pool-id us-east-1_123456789 --client-id 3n4b5urk1ft4fl3mg5e62d9ado --challenge-name SOFTWARE_TOKEN_MFA --challenge-responses USERNAME=Bob,SOFTWARE_TOKEN_MFA_CODE=123456 --session Xxz6iadwuWJGN4Z7f4ul5p50IHUqITquoaNxxyDvep.......3A6GokZWKeQ6gkFW4PgvImportant: Make sure to replace the following variables with your own information: client-id, username, and software_token_MFA_Code.Example output from admin-respond-to-auth-challenge command{ "AuthenticationResult": { "ExpiresIn": 3600, "RefreshToken": "eyJjdHkiOiJKV1QiLCJlbmMi.......dlbjrtyizlLzZZ5fjjCgL__AVHEzYycjJs_h3i-ly_KixDNtz9VEC", "TokenType": "Bearer", "NewDeviceMetadata": { "DeviceKey": "us-east-1_28abrd7-10f7-9fc6-a931-3ede1c8ckd75", "DeviceGroupKey": "-Gqkj3brS" }, "IdToken": "eyJraWQiOiIzcFFSV29Pb........mNMbE_vvPkQYBuA9ackoER1aSABFGaKK4BpgPjMn7la_A", "AccessToken": "eyJraWQiOi...........qwvQq4awt63TyWw" }, "ChallengeParameters": {}}Follow"
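A minimal boto3 sketch of the same TOTP setup flow, not part of the original article. It assumes you already have a valid access token for the signed-in user and a TOTP code from an authenticator app; the token and code values are placeholders.

import boto3

idp = boto3.client("cognito-idp", region_name="us-east-1")
access_token = "eyJraWQiO..."          # placeholder access token for the user

# Step 1: generate the secret that you enter into the authenticator app.
secret = idp.associate_software_token(AccessToken=access_token)["SecretCode"]
print("Enter this key in your authenticator app:", secret)

# Step 2: verify the first time-based code produced by the app.
idp.verify_software_token(
    AccessToken=access_token,
    UserCode="123456",                 # code shown in the authenticator app
    FriendlyDeviceName="BobPhone",
)

# Step 3: make TOTP the preferred MFA method for the user.
idp.set_user_mfa_preference(
    AccessToken=access_token,
    SoftwareTokenMfaSettings={"Enabled": True, "PreferredMfa": True},
)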
https://repost.aws/knowledge-center/cognito-user-pool-totp-mfa
How do I resolve the error "The new key policy will not allow you to update the key policy in the future" when I try to create an AWS KMS key using AWS CloudFormation?
"When I create an AWS KMS key and define an AWS Key Management Service (AWS KMS) key policy using AWS CloudFormation, the AWS KMS key creation fails. Then, I get the following error message: "The new key policy will not allow you to update the key policy in the future.""
"When I create an AWS KMS key and define an AWS Key Management Service (AWS KMS) key policy using AWS CloudFormation, the AWS KMS key creation fails. Then, I get the following error message: "The new key policy will not allow you to update the key policy in the future."Short descriptionAWS KMS performs safety checks when a key policy is created. One safety check confirms that the principal in the key policy has the required permissions to make the CreateKey API and PutKeyPolicy API. This check eliminates the possibility of the AWS KMS key becoming unmanageable, which means that you can't change the key policy or delete the key.Important: Be sure that the key policy that you create allows the current user to administer the AWS KMS key.ResolutionWhen you create an AWS CloudFormation stack, then an AWS Identity and Access Management (IAM) user or role is used to make the CreateStack API call. This user is also used create resources specified in the AWS CloudFormation template.1.    When you create an AWS KMS key using AWS CloudFormation, choose the same IAM user or role that's the key administrator principal for the AWS KMS key.In the following example, the AWS CloudFormation stack is created by the IAM user arn:aws:iam::123456789012:user/Alice. The principal is designated as the key administrator. The IAM user "Alice" is now allowed to modify the key policy after the key policy is created."Type" : "AWS::KMS::Key", "Properties" : { "Description" : "A sample key", "KeyPolicy" : { "Version": "2012-10-17", "Id": "key-default-1", "Statement": [ { "Sid": "Allow administration of the key", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::123456789012:user/Alice" }, "Action": [ "kms:Create*", "kms:Describe*", "kms:Enable*", "kms:List*", "kms:Put*", "kms:Update*", "kms:Revoke*", "kms:Disable*", "kms:Get*", "kms:Delete*", "kms:ScheduleKeyDeletion", "kms:CancelKeyDeletion" ], "Resource": "*" }, { "Sid": "Allow use of the key", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::123456789012:user/Bob" }, "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey" ], "Resource": "*" } ] } } }2.    Set the principal key administrator, or set the AWS account root user as the principal key administrator.To set the principal key administrator, use the Amazon Resource Name (ARN):If your AWS CloudFormation stack is created by a SAML or web federated user account, set the principal as the user's assumed role for the ARN. For example:"Principal": { "AWS": "arn:aws:sts::123456789012:assumed-role/FederatedAccess/FederatedUsername" }Note: The name of the IAM role is FederatedAccess, and the name of the federated user is FederatedUsername.If the AWS CloudFormation service role is used to create the stack, then set the principal as the service role ARN. For example:"Principal": { "AWS": "arn:aws:iam::123456789012:role/ServiceRoleName” }Note: The name of the AWS CloudFormation service role is ServiceRoleName.To set the AWS account root user as the principal key administrator, see the following example:"Principal": { "AWS": "arn:aws:iam::123456789012:root" }Note: If the principal key administrator is set to the root ARN, be sure you have the correct permissions. The IAM user, role, or service role creating the AWS CloudFormation stack must have the IAM permissions to make the CreateKey and PutKeyPolicy API calls.Related informationAWS Key Management ServiceAuthentication and access control for AWS KMSFollow"
https://repost.aws/knowledge-center/update-key-policy-future
"How can I scale my request rate to S3, to improve request rate performance?"
I expect my Amazon Simple Storage Service (Amazon S3) bucket to get high request rates. What object key naming pattern should I use to get better performance?
"I expect my Amazon Simple Storage Service (Amazon S3) bucket to get high request rates. What object key naming pattern should I use to get better performance?ResolutionAmazon S3 automatically scales by dynamically optimizing performance in response to sustained high request rates. Your application can achieve 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket. There are no limits to the number of prefixes in a bucket. You can increase your read or write performance by using parallelization. If Amazon S3 is optimizing for a new request rate, you receive a temporary HTTP 503 request response until the optimization completes. Because Amazon S3 optimizes its prefixes for request rates, unique key naming patterns are no longer a best practice.For more information about Amazon S3 performance optimization, see Performance guidelines for Amazon S3 and Performance design patterns for Amazon S3.Follow"
https://repost.aws/knowledge-center/s3-object-key-naming-pattern
How can I grant a user access to a specific folder in my Amazon S3 bucket?
I want to grant an AWS Identity and Access Management (IAM) user access to a specific folder in my Amazon Simple Storage Service (Amazon S3) bucket. How can I do that?
"I want to grant an AWS Identity and Access Management (IAM) user access to a specific folder in my Amazon Simple Storage Service (Amazon S3) bucket. How can I do that?ResolutionIf the IAM user and S3 bucket belong to the same AWS account, then you can grant the user access to a specific bucket folder using an IAM policy. As long as the bucket policy doesn't explicitly deny the user access to the folder, you don't need to update the bucket policy if access is granted by the IAM policy. You can add the IAM policy to individual IAM users, or you can attach the IAM policy to an IAM role that multiple users can switch to.If the IAM identity (user or role) and the S3 bucket belong to different AWS accounts, then you must grant access on both the IAM policy and the bucket policy. For more information on cross-account access, see How can I grant a user in another AWS account the access to upload objects to my Amazon S3 bucket?The following example IAM policy allows a user to download objects from the folder DOC-EXAMPLE-BUCKET/media using the Amazon S3 console. The policy includes these statements:AllowStatement1 allows the user to list the buckets that belong to their AWS account. The user needs this permission to be able to navigate to the bucket using the console.AllowStatement2A allows the user to list the folders within DOC-EXAMPLE-BUCKET, which the user needs to be able to navigate to the folder using the console. The statement also allows the user to search on the prefix **media/**using the console.AllowStatement3 allows the user to list the contents within DOC-EXAMPLE-BUCKET/media.AllowStatement4A allows the user to download objects (s3:GetObject) from the folder DOC-EXAMPLE-BUCKET/media.{ "Version":"2012-10-17", "Statement": [ { "Sid": "AllowStatement1", "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"], "Effect": "Allow", "Resource": ["arn:aws:s3:::*"] }, { "Sid": "AllowStatement2A", "Action": ["s3:ListBucket"], "Effect": "Allow", "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET"], "Condition":{"StringEquals":{"s3:prefix":["","media"]}} }, { "Sid": "AllowStatement3", "Action": ["s3:ListBucket"], "Effect": "Allow", "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET"], "Condition":{"StringLike":{"s3:prefix":["media/*"]}} }, { "Sid": "AllowStatement4A", "Effect": "Allow", "Action": ["s3:GetObject"], "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET/media/*"] } ]}As another example, the following IAM policy allows a user to download and upload objects to the folder DOC-EXAMPLE-BUCKET/media using either the console or programmatic methods like the AWS Command Line Interface (AWS CLI) or the Amazon S3 API. The differences from the previous IAM policy are:AllowStatement2B includes "s3:delimiter":["/"], which specifies the forward slash character (/) as the delimiter for folders within the path to an object. 
It's a best practice to specify the delimiter if the user makes requests using the AWS CLI or the Amazon S3 API.AllowStatement4B allows the user to download (s3:GetObject) and upload (s3:PutObject) objects to the folder DOC-EXAMPLE-BUCKET/media.{ "Version":"2012-10-17", "Statement": [ { "Sid": "AllowStatement1", "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"], "Effect": "Allow", "Resource": ["arn:aws:s3:::*"] }, { "Sid": "AllowStatement2B", "Action": ["s3:ListBucket"], "Effect": "Allow", "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET"], "Condition":{"StringEquals":{"s3:prefix":["","media"],"s3:delimiter":["/"]}} }, { "Sid": "AllowStatement3", "Action": ["s3:ListBucket"], "Effect": "Allow", "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET"], "Condition":{"StringLike":{"s3:prefix":["media/*"]}} }, { "Sid": "AllowStatement4B", "Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject"], "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET/media/*"] } ]}Related informationAWS Policy GeneratorFollow"
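A minimal boto3 sketch, not part of the original article, that attaches the first (download-only) example policy to an IAM user as an inline policy. The user name and inline policy name are placeholders.

import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "AllowStatement1",
         "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
         "Effect": "Allow", "Resource": ["arn:aws:s3:::*"]},
        {"Sid": "AllowStatement2A", "Action": ["s3:ListBucket"], "Effect": "Allow",
         "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET"],
         "Condition": {"StringEquals": {"s3:prefix": ["", "media"]}}},
        {"Sid": "AllowStatement3", "Action": ["s3:ListBucket"], "Effect": "Allow",
         "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET"],
         "Condition": {"StringLike": {"s3:prefix": ["media/*"]}}},
        {"Sid": "AllowStatement4A", "Effect": "Allow", "Action": ["s3:GetObject"],
         "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET/media/*"]},
    ],
}

# Attach the policy inline to the user who needs folder access.
iam.put_user_policy(
    UserName="example-user",               # hypothetical IAM user
    PolicyName="media-folder-read-access",
    PolicyDocument=json.dumps(policy),
)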
https://repost.aws/knowledge-center/s3-folder-user-access
How do I automate tasks in my AWS account with Lambda?
I want to use AWS Lambda to automate tasks in my AWS account. How do I set that up?
"I want to use AWS Lambda to automate tasks in my AWS account. How do I set that up?Short descriptionThere are multiple AWS services that you can integrate with Lambda to invoke a function on a schedule, or in response to certain events. For information on what AWS services you can integrate with Lambda, see Using Lambda with other services.This article shows examples for two of the most common AWS services that use Lambda for automating tasks:Amazon EventBridgeAmazon Simple Storage Service (Amazon S3) Event Notifications.For more examples, see Using Lambda with scheduled events and Using Lambda with Amazon S3.ResolutionImportant: EventBridge and Amazon S3 automatically update your Lambda function's execution role, adding required access using resource-based policies. Not all AWS services do this. If you're integrating Lambda with another AWS service, make sure that you add the required permissions manually.Create a Lambda function that logs input into your Amazon CloudWatch LogsNote: The function you create will be the target of the events that you configure. Replace the example function code with your own code for the task that you want to automate in your use case.1.    In the Lambda console, choose Create function. The Create function page opens to the default Author from scratch option.2.    Under Basic information, enter the following:For Function name, enter a name for your function.For Runtime, choose Node.js 14.x.3.    Under Permissions, choose Change default execution role. Then, do one of the following:If you're new to Lambda, choose Create a new role with basic Lambda permissions.If you've already created a Lambda execution role that you want to use, choose Use an existing role.If you want to create a new execution role using an AWS managed policy template, choose Create a new role from AWS policy templates. Then, enter a name and choose a policy template.4.    Choose Create function.5.    On the Configuration pane, under Function code, open up the index.js file. Then, copy and paste the following example function code into the editor pane:'use strict';exports.handler = (event, context, callback) => { console.log('LogScheduledEvent'); console.log('Received event:', JSON.stringify(event, null, 2)); callback(null, 'Finished');};6.    Choose Deploy.For more information, see Create a Lambda function with the console. You can also create a Lambda function by building and uploading your own deployment package or by creating and uploading a container image.(For EventBridge) Create EventBridge rules that trigger on a schedule or in response to an eventFor scheduled eventsTo automate tasks with specific timing and without any input, follow the instructions in Creating an EventBridge rule that triggers on a schedule. Make sure that you specify a schedule for when you want your automated task to run. Add the Lambda function that you created as a target to trigger in response to the event.Note: After you create the rule, your Lambda function is invoked automatically with the timing that you defined. 
If you used the example function code, a stream of logs from Lambda populates in CloudWatch on schedule.For an example, see Schedule Lambda functions using EventBridge.For service eventsTo automate tasks in response to an event generated by an AWS service, follow the instructions in Creating a rule for an AWS service.For this example setup, use the following configurations when you create the rule:For Service Name, choose EC2.For Event Type, choose EC2 Instance State-change Notification.Add the Lambda function that you created as a target.Note: After you create the rule, your Lambda function is invoked for each occurrence of the event pattern that you defined.For more information, see Event patterns in EventBridge and EventBridge event examples from supported AWS services.To test the EventBridge ruleTo test the EventBridge rule, cause a state change in an Amazon Elastic Compute Cloud (Amazon EC2) instance by stopping or starting the instance. Lambda will send a stream of logs to CloudWatch.For information on how to launch an EC2 instance, see Launch an instance.Note: An EC2 instance can incur charges on your AWS account. If you create an instance for this example only, make sure that you terminate the instance when you're done.For more information, see Getting Started with Amazon EventBridge.(For Amazon S3) Configure an S3 Event Notification to trigger your Lambda functionTo use Amazon S3 Event Notifications to trigger your Lambda function, follow the instructions in Enabling and configuring event notifications.For this example setup, use the following configurations when you create the S3 Event Notification:For Event types, choose the All object create events check box.For Destination, choose Lambda function.In the Lambda function dropdown list, choose the Lambda function that you created earlier.For information on how to create an S3 bucket, see Create your first S3 bucket.To test the Amazon S3 Event NotificationTo test the setup, upload an object to the S3 bucket. If you configured a Prefix or Suffix filter, make sure that the object has the correct prefix or suffix.When uploading is complete, your Lambda function invokes. If you used the example function code, a stream of logs from Lambda populates in CloudWatch. These CloudWatch logs contain metadata from the event object, such as the S3 bucket name and the object name.For an example, see Using Lambda with Amazon S3.Related informationHow can I get customized email notifications when my EC2 instance changes states?Follow"
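A minimal boto3 sketch, not part of the original article, that wires up the scheduled EventBridge rule from the console steps above. Because this uses the API instead of the console, it also adds the invoke permission to the function's resource-based policy. The account ID, Region, rule name, and function name are placeholders.

import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

function_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-automation-function"

# Create a scheduled rule that fires every 5 minutes.
rule_arn = events.put_rule(
    Name="my-automation-schedule",
    ScheduleExpression="rate(5 minutes)",
    State="ENABLED",
)["RuleArn"]

# Grant EventBridge permission to invoke the function (the console does this for you,
# but the API does not).
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId="allow-eventbridge-schedule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)

# Point the rule at the Lambda function.
events.put_targets(
    Rule="my-automation-schedule",
    Targets=[{"Id": "lambda-target", "Arn": function_arn}],
)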
https://repost.aws/knowledge-center/lambda-automate-tasks
Why is my Kinesis data stream returning a 500 Internal Server Error?
My Amazon Kinesis data stream is returning a 500 Internal Server Error or a 503 Service Unavailable Error. How do I detect and troubleshoot these errors within Amazon Kinesis Data Streams?
"My Amazon Kinesis data stream is returning a 500 Internal Server Error or a 503 Service Unavailable Error. How do I detect and troubleshoot these errors within Amazon Kinesis Data Streams?Short DescriptionIf you are producing to a Kinesis data stream, one of the following internal errors can occur:PutRecord or PutRecords returns an AmazonKinesisException 500 or AmazonKinesisException 503 error with a rate above 1% for several minutes.SubscribeToShard.Success or GetRecords returns an AmazonKinesisException 500 or AmazonKinesisException 503error with a rate above 1% for several minutes.You can troubleshoot these internal errors by doing the following:Calculate your error rate.Implement a retry mechanism.ResolutionCalculate your error rateLook for significant drops in the time windows of either PutRecord.Success or GetRecord.Success under the Monitoring tab. If you notice significant drops, calculate the error rate to determine the severity of your Kinesis data stream issue.To calculate your error rate, compute the average value of PutRecord.Success and GetRecord.Success.Implement a retry mechanismAfter you've calculated your error rate, confirm whether the error rate falls below 0.1%. Kinesis Data Streams allows for high throughput writes with a low error rate. Average error rates are typically below 0.01%.If you wrote your own consumer or producer, implement a retry mechanism in your application code. For more information about retry mechanism implementations, see the Retries section in Implementing efficient and reliable producers with the Amazon Kinesis Producer Library.If your error rate exceeds 1% for several minutes, contact AWS Support. Provide the following information:Applications used to read or write data to/from Data StreamsNumber of shards in your Kinesis data streamServer-side encryption settingsSpecific shard IDs that are impactedTime frame where drops in success rates are observedRequest IDs that are reporting internal failuresRelated InformationDeveloping producers using the Amazon Kinesis Producer LibraryDeveloping KCL 2.x consumersFollow"
https://repost.aws/knowledge-center/kinesis-data-stream-500-error
How do I notify AWS AppSync subscribers of external database updates that client-side mutations don't perform?
I need my app's clients to update in real time when external database changes are made that aren't performed through client-side mutations. How do I use AWS AppSync to notify subscribers of these changes?
"I need my app's clients to update in real time when external database changes are made that aren't performed through client-side mutations. How do I use AWS AppSync to notify subscribers of these changes?Short descriptionUse local resolvers to notify subscribers of external database changes in real time, without making a data source call. For example, local resolvers are useful for apps that regularly update information, such as an airline app.Complete the steps in the Resolution section to create an example GraphQL API. The GraphQL API updates subscribers in real time when data is written to an Amazon DynamoDB table data source.ResolutionCreate a GraphQL API using the wizardUse the AWS AppSync guided schema wizard to create a new GraphQL API. For more information, see Designing a GraphQL API.1.    Open the AWS AppSync console.2.    Choose Create API.3.    On the Getting Started page, under Customize your API or import from Amazon DynamoDB, choose Create with wizard, and then choose Start.4.    On the Create a model page:Under Name the model, enter a name for your model. For this example, Book is the name.Under Configure model fields, define the data types for your app. For this example setup, keep the default field names (id and title) and types.(Optional) Expand Configure model table (optional) to add an index.Choose Create.5.    On the Create resources page, enter a name for your API. Then, choose Create. AWS AppSync creates your API and opens the Queries page of your API.Create a test subscription1.    Open the AWS AppSync console.2.    Navigate to the Queries page of your API, and then, open a duplicate browser tab or window.3.    In the duplicate browser tab or window, clear the contents of the query editor and put in the following query:subscription onCreateBook { onCreateBook { id title }}The preceding query creates a subscription to createBook mutations.4.    Choose the play button (Execute Query). The duplicate browser tab or window is subscribed to createBook mutations.5.    In the original browser tab or window, choose the play button (Execute Query), and then choose createBook to run the mutation. The results show in both the original and duplicate (subscription) browser tabs or windows.6.    After you see the subscription, close the duplicate browser tab or window.Create a None-type data sourceThe None data source type passes the request mapping template directly to the response mapping template.1.    In the original browser tab or window, open the AWS AppSync console.2.    In the left navigation pane, choose Data Sources.3.    Choose Create Data Source.4.    On the New Data Source page, under Create new Data Source, complete the following steps:For Data source name, enter a name. For example, real_time_data.For Data source type, choose None.5.    Choose Create.For more information, see Attaching a data source.Add a mutation to the schemaCreate a second mutation for an administrator to use, or to set off when you update the schema.Update the schema with a mutation that passes database updates to the None type data source.1.    Open the AWS AppSync console.2.    In the left navigation pane, choose Schema.3.    In the schema editor, under type Mutation {, add the following command to create the new mutation type for external updates:createBookExt(input: CreateBookInput!): Book4.    In the schema editor, under type Subscription {, locate the following line:onCreateBook(id: ID, title: String): Book @aws_subscribe(mutations: ["createBook"])5.    
Add "createBookExt" to the list of mutations:onCreateBook(id: ID, title: String): Book @aws_subscribe(mutations: ["createBook", "createBookExt"])6.    Choose Save Schema.For more information, see Designing your schema.Attach a resolver to the mutation1.    Open the AWS AppSync console.2.    On the Schema page of your API, under Resolvers, scroll down to Mutation. Or, for Filter types, enter Mutation.3.    Next to createBookExt(...): Book, under Resolver, choose Attach.4.    On the Create new Resolver page, for Data source name, choose the name of the None type data source that you created. For example, real_time_data.5.    Under Configure the request mapping template, locate the request function:export function request(ctx) { return {};}6.    Modify the function to return ctx.args:export function request(ctx) { return ctx.args;}7.    Choose Create.For more information, see Configuring resolvers (VTL).Create a new test subscription1.    Open the AWS AppSync console.2.    In the left navigation pane, choose Queries.3.    On the Queries page of your API, open a duplicate browser tab or window.4.    In the duplicate browser tab or window, clear the contents of the query editor, and put in the following query:subscription onCreateBook { onCreateBook { id title }}5.    Choose the play button (Execute Query). The duplicate browser tab or window is now subscribed to both the createBook and createBookExt mutations.Create a new test mutation1.    In the original browser tab or window, on the Queries page of your API, clear the contents of the query editor. Then, put in the following query:mutation createBook($createbookinput: CreateBookInput!) { createBook(input: $createbookinput) { id title }}In the Query Variables section at the bottom of the editor, clear the contents and put in the following query:{ "createbookinput": { "title": "My New Book" }}The preceding query creates a new Book with the createBook mutation.2.    Choose the play button (Execute Query).3.    In the duplicate (subscription) browser tab or window, note that the subscriber receives the update in real time.(Optional) Refer to example use casesAs you build your client app and apply these concepts, you can use the following example of building an airline app that provides pricing and flight times.The following steps demonstrate how to notify subscribed clients when flight details change in a DynamoDB table:1.    Create an AWS Lambda function that uses a DynamoDB stream as the trigger. When the DynamoDB table updates, it invokes the Lambda function. For more information, see Using AWS Lambda with Amazon DynamoDB.2.    In the Lambda function code, include logic to filter the appropriate updates and perform a mutation call to AWS AppSync. This causes AWS AppSync to notify subscribers through the subscription. For more information, see Tutorial: AWS Lambda resolvers.3.    In AWS AppSync, add a new mutation field (for example, named publishPrice) with a local resolver.4.    Subscribe to that mutation in a subscription field (for example, named onPriceUpdate).Example schematype flightDetails { id: ID! src: String! destn: String! price : Int}type Mutation { # A local resolver targeting a None data source that publishes the message to subscribed clients. publishPrice(price: Int): flightDetails}type Subscription { # Notified whenever *publishPrice* is called. onPriceUpdate: Message @aws_subscribe(mutations: ["publishPrice"])}type Query { ... }For more information, see Designing your schema.5.    
Create another AWS Lambda function that uses a DynamoDB stream as the activation. In this function, call the publishPrice mutation. Because the publishPrice mutation has a local resolver, the data isn't written to DynamoDB again. With this method, you can use AWS AppSync as a PubSub broker.For more information and another example use case, see Tutorial: Local resolvers.Related informationResolver tutorials (VTL)Run queries and mutationsResolver mapping template reference (VTL)Follow"
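A minimal Lambda handler sketch for step 5, not part of the original tutorial. It reads price updates from a DynamoDB stream event and calls the publishPrice mutation so that AWS AppSync notifies onPriceUpdate subscribers. The AppSync endpoint and API key are placeholders, and API-key authorization is assumed for brevity; a production function would more likely use IAM-signed requests.

import json
import urllib.request

APPSYNC_ENDPOINT = "https://example1234567890.appsync-api.us-east-1.amazonaws.com/graphql"
API_KEY = "da2-exampleapikey"      # hypothetical API key

MUTATION = "mutation PublishPrice($price: Int) { publishPrice(price: $price) { price } }"

def handler(event, context):
    for record in event.get("Records", []):
        if record.get("eventName") != "MODIFY":
            continue
        # Read the updated price from the DynamoDB stream record.
        new_image = record["dynamodb"]["NewImage"]
        price = int(new_image["price"]["N"])

        # Call the publishPrice mutation; the local resolver fans it out to subscribers.
        payload = json.dumps({"query": MUTATION, "variables": {"price": price}}).encode()
        request = urllib.request.Request(
            APPSYNC_ENDPOINT,
            data=payload,
            headers={"Content-Type": "application/json", "x-api-key": API_KEY},
        )
        with urllib.request.urlopen(request) as response:
            print(response.read().decode())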
https://repost.aws/knowledge-center/appsync-notify-subscribers-real-time
How do I use the restore tiers in the Amazon S3 console to restore archived objects from Amazon S3 Glacier storage class?
How do I use the restore tiers in the Amazon Simple Storage Service (Amazon S3) console to restore objects from Amazon S3 Glacier Flexible Retrieval storage class?
"How do I use the restore tiers in the Amazon Simple Storage Service (Amazon S3) console to restore objects from Amazon S3 Glacier Flexible Retrieval storage class?ResolutionNote: The information in this article doesn't apply to Amazon S3 Glacier Deep Archive, or S3 Glacier Instant Retrieval.If you archived Amazon S3 objects to S3 Flexible Retrieval storage class using a lifecycle rule, then you can choose from three options to restore them:Expedited retrievalStandard retrievalBulk retrievalFollow these steps to restore an archived object using the Amazon S3 console. Choose a restore tier that meets your needs:1.    Open the Amazon S3 console, and choose the Amazon S3 bucket that stores the archived objects that you want to restore.2.    Select the archived object, and choose Actions.3.    Select Initiate restore, and then specify the number of days that you want the restored file to be accessible for.4.    Select a Retrieval option—Bulk, Standard, or Expedited—and choose Restore.After the restoration is complete, you can download the restored object from the Amazon S3 console. Depending on the retrieval option selected, the restore completes in:1-5 minutes for expedited retrievals3-5 hours for standard retrievals5-12 hours for bulk retrievalsTo confirm that your file is restored, do the following:1.    Choose the Amazon S3 bucket that contains the file.2.    Select the check box next to the file name to view the file details. The Restoration expiry date verifies that the file is restored. This expiration date indicates when Amazon S3 removes the restored copy of your file.3.    Choose Download to download the object as a file to your local client.4.    (Optional) To permanently restore an S3 object, change the object's storage class using the AWS Command Line Interface (AWS CLI). For more information, see How can I restore an S3 object from the Amazon S3 Glacier storage class using the AWS CLI?Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.Related informationRestoring an archived objectFollow"
https://repost.aws/knowledge-center/restore-glacier-tiers
How do I revoke refresh tokens issued by Amazon Cognito?
I want to use APIs or endpoints to revoke refresh tokens generated by Amazon Cognito.
"I want to use APIs or endpoints to revoke refresh tokens generated by Amazon Cognito.ResolutionYou can use APIs and endpoints to revoke refresh tokens generated by Amazon Cognito.Note: You can revoke refresh tokens in real time so that these refresh tokens can't generate access tokens.Prerequisites for revoking refresh tokensTurn on token revocation for an app client to revoke the refresh tokens issued by that app client. The EnableTokenRevocation parameter is turned on by default when you create a new Amazon Cognito user pool client. Before you can revoke a token for an existing user pool client, turn on token revocation within the UpdateUserPoolClient API operation. Include the current settings from your app client and set the EnableTokenRevocation parameter to true.When you turn on token revocation settings, the origin_jti and jti claims are added to the access and ID tokens.The jti claim provides a unique identifier for JSON Web Tokens (JWTs).The jti claim is used to prevent the JWTs from being replayed.The jti value is a case-sensitive string.When the REFRESH_TOKEN authentication flow is used to generate new access and ID tokens, the new access and ID tokens have the same origin_jti claim. The jti claims are different.For more information, see Turn on token revocation and Using tokens with user pools.Expected results of revoking refresh tokensFor information about what to expect when you revoke refresh tokens, including the effect on access tokens and JWTs, see Revoking tokens and RevokeToken.Note: For more information about JWTs, see Verifying a JSON Web Token.Using the RevokeToken API call to revoke refresh tokensThe refresh token to be revoked, the app client ID, and the client secret are required parameters to call the RevokeToken API.Note: The client secret is required only when the client ID has a secret.Request syntax:{ "ClientId": "string", "ClientSecret": "string", "Token": "string"}Example AWS Command Line Interface (AWS CLI) command:Note: Replace the token <value> with your token information. Replace the client-id <value> with your client ID. Replace the client-secret <value> with your app client's secret.aws cognito-idp revoke-token--token <value>--client-id <value>--client-secret <value>Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Example curl command:Note: Replace <region> with your AWS Region. Replace <refresh token> with your token information. Replace <client-id> with your client ID.awscurl \-v \-X POST \--service cognito-idp \--region <region> \-H 'X-Amz-Target: AWSCognitoIdentityProviderService.RevokeToken' \-H 'Content-Type: application/x-amz-json-1.1' \-d '{"ClientId": "<client-id>", "Token": "<refresh-token>"}' \https://cognito-idp.<region>.amazonaws.comUsing the revoke endpoint to revoke refresh tokensThe /oauth2/revoke endpoint is available after you add a domain to your user pool. After the endpoint revokes the tokens, you can't use the revoked tokens to access the APIs that Amazon Cognito tokens authenticate. For information about the /oauth2/revoke endpoint, including request parameters, see Revoke endpoint.Example 1: Revoke token with an app client with no app secret:Note: Replace <region> with your AWS Region. Replace <refresh token> with your refresh token information. 
Replace <client-id> with your client ID.POST /oauth2/revoke HTTP/1.1 Host: https://mydomain.auth.<region>.amazoncognito.com Accept: application/json Content-Type: application/x-www-form-urlencoded token=<refresh token>&client_id=<client-id>Example 2: Revoke token with an app client with app secret:POST /oauth2/revoke HTTP/1.1 Host: https://mydomain.auth.<region>.amazoncognito.com Accept: application/json Content-Type: application/x-www-form-urlencoded Authorization: Basic czZCaGRSa3F0MzpnWDFmQmF0M2JW token=<refresh token>To learn about setting up an authorization header, see How do I troubleshoot "Unable to verify secret hash for client <client-id>" errors from my user pool?Endpoint errorsFor more information about endpoint errors, see Revoke endpoint and review the information under Revocation error response.Follow"
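A minimal boto3 sketch, not part of the original article, that calls the same RevokeToken API. The token, client ID, and client secret are placeholders; the ClientSecret argument is needed only if the app client has a secret.

import boto3

idp = boto3.client("cognito-idp", region_name="us-east-1")

idp.revoke_token(
    Token="<refresh-token>",          # the refresh token to revoke
    ClientId="<client-id>",
    ClientSecret="<client-secret>",   # omit this argument if the app client has no secret
)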
https://repost.aws/knowledge-center/cognito-revoke-refresh-tokens
How can I access the internet from my WorkSpace?
I want to turn on internet access from my WorkSpace in Amazon WorkSpaces. How can I do that?
"I want to turn on internet access from my WorkSpace in Amazon WorkSpaces. How can I do that?ResolutionThe method to turn on internet access from your WorkSpace differs depending on whether the WorkSpace is located in a private or public subnet. A public subnet sends outbound traffic directly to the internet using an internet gateway route. Instances in a private subnet access the internet using a network address translation (NAT) gateway that resides in the public subnet.Important: The security group for your WorkSpaces must allow outbound traffic on ports 80 (HTTP) and 443 (HTTPS) to all destinations (0.0.0.0/0).Turn on internet access from WorkSpaces located in a public subnetA WorkSpace that's located in a public subnet must meet requirements to turn on internet access. The WorkSpace needs both a route to an internet gateway and a public IP address assignment.Create an internet gateway.Update the route tables for your public subnets. The default route (Destination 0.0.0.0/0) must target the internet gateway.You can assign public IP addresses to your WorkSpaces automatically or manually.Automatically assign public IP addressesYou can automatically assign public IP addresses to your WorkSpaces by turning on internet access on the WorkSpaces directory. After you turn on automatic assignment, each WorkSpace that you launch is assigned a public IP address. For instructions and more information, see Configure automatic IP addresses.Note: WorkSpaces that already exist before you turn on automatic assignment don't receive an Elastic IP address until you rebuild them.Manually assign public IP addressesYou can manually assign a public Elastic IP address using the Amazon Elastic Compute Cloud (Amazon EC2) console. For instructions, see How do I associate an Elastic IP address with a WorkSpace?Turn on internet access from WorkSpaces located in a private subnetIf you use AWS Directory Service for Microsoft Active Directory, then configure the virtual private cloud (VPC) with one public subnet and two private subnets. You must configure your directory for the private subnets. To provide internet access to WorkSpaces in these private subnets, configure a NAT gateway in the public subnet.Create a NAT gateway.Update the route tables for the private subnets. The default route (Destination 0.0.0.0/0) must target the NAT gateway.Related informationProvide internet access from your WorkSpaceConfigure a VPC for WorkSpacesNetworking and access for WorkSpacesFollow"
https://repost.aws/knowledge-center/workspaces-turn-on-internet
How do I stop receiving email spam?
I want to avoid receiving spam and unsolicited emails.
"I want to avoid receiving spam and unsolicited emails.ResolutionAWS doesn't allow the use of AWS infrastructure to send unsolicited mass emails, also known as "spam." This includes content such as promotions, advertising, and solicitation. If you notice any violation of this policy, then follow Amazon's abuse reporting process to help us stop or remedy the violation.To avoid receiving spam, follow these best practices everywhere on the internet:Be careful about who you provide your email address to. Spammers often obtain lists of emails legally because they purchase them from legitimate sources. When you sign up for a website or a service, be sure to read the website's privacy policy. The policy includes information on whether the owner of the website intends to sell information, such as your email address, to third parties.Avoid publishing your email address in public pages or forums. Spammers use bots to scrape public pages and forums for email addresses. Sharing your email address in these pages can make you a potential target for spammers.Tip: Many forums provide direct messaging tools to contact other users privately. If you want to provide your email address to other users of the forum, then use these private messaging tools when available.Avoid replying to phishing emails. Always verify the legitimacy of your email source. Avoid providing personal or sensitive information to unknown senders. You might receive phishing emails in many different forms. Sometimes, the spammers pretend to be from an institution, such as a bank, or pretend that you've won a contest. If you reply to these emails, then the spammers recognize that the email address is valid, and they might sell your email address to other spammers.Use an email address that's hard to guess. Avoid using email addresses that include easy-to-guess names or phrases. Otherwise, spammers are more likely to guess your email address correctly and send you spam emails. Spammers use machine learning and other algorithms to guess possible valid email addresses. Also, avoid using your exact email address as a login for different websites.Related informationReport suspicious emailsHow do I unsubscribe from AWS emails?How do I report abuse of AWS resources?How do I make sure that the email that I received is actually from Amazon?Follow"
https://repost.aws/knowledge-center/stop-email-spam
Why is an Amazon RDS DB instance stuck in the modifying state when I try to increase the allocated storage?
"I want to increase the allocated storage for an Amazon Relational Database Service (Amazon RDS) DB instance, but the operation is stuck in the modifying state."
"I want to increase the allocated storage for an Amazon Relational Database Service (Amazon RDS) DB instance, but the operation is stuck in the modifying state.ResolutionBy design, storage scaling operations for an Amazon RDS DB instance have minimal impact on ongoing database operations. In most cases, the storage scaling operations are completely offloaded to the Amazon Elastic Block Store (Amazon EBS) layer and transparent from the database. This process is typically completed in a few minutes. However, for some legacy Amazon RDS storage volumes, you might require a different process for modifying the size, IOPS, or volume type of your Amazon RDS storage. You might need to make a full copy of the data using a potentially long-running I/O operation.Most RDS volume geometries include either one Amazon EBS volume or four striped EBS volumes in a RAID0 configuration depending on the size of the allocated storage. You must use the legacy method under either of the following conditions:Your RDS instance doesn't have either one or four volumes.The target size for your modification increases the allocated storage beyond 400 GB.You can view the number of volumes in use on your RDS instances using the Enhanced Monitoring metrics. Also, any source volume that uses previous generation EBS volumes require the legacy method for modifying the size of the allocated storage.The following factors can affect the time required to increase the allocated storage of an RDS DB instance:The legacy method uses I/O resources, and this could increase your database workload. It's a best practice to use the minimal impact method whenever possible. The minimal impact method doesn't use any resources on the database. If you must use the legacy method, then it's a best practice to schedule the storage increase operations outside of the peak hours. This might reduce the time required to complete the storage increase operations.If you have high load conditions and must use the legacy method, then you can create a read replica for the RDS DB instance. You can perform the storage scaling operations on the read replica and then promote the read replica DB instance to the primary DB instance.If you have high-load conditions, then do the following:Create a read replica for the RDS DB instance.Perform storage scaling operations on the read replica.Promote the read replica DB instance to the primary DB instance.After a storage modification has started, the operation can't be canceled. The DB instance status is in the modifying state until the Amazon EBS operations are complete. You can restore a DB instance to a specified time or restore from a DB snapshot to create a new DB instance with the original storage configuration. A DB instance that's restored is not in the modifying status.Related informationTroubleshooting for Amazon RDSModifyDBInstanceUsing the Apply Immediately settingFollow"
https://repost.aws/knowledge-center/rds-stuck-modifying
How can I redirect HTTP requests to HTTPS using an Application Load Balancer?
I want to redirect HTTP requests to HTTPS using Application Load Balancer listener rules. How can I do this?
"I want to redirect HTTP requests to HTTPS using Application Load Balancer listener rules. How can I do this?ResolutionConfirm your version of Load BalancerOpen the Amazon Elastic Compute Cloud (Amazon EC2) console.Under Load Balancing in the sidebar, choose Load Balancers.Find the load balancer for which you're creating a listener rule. Note which version is listed under the Type column: application, classic, network, or gateway.The following steps apply only to Application Load Balancer. If you're using Classic Load Balancer, then see How do I redirect HTTP traffic to HTTPS on my Classic Load Balancer?Note: You must create a target group before following the steps below.Create an HTTP listener rule that redirects HTTP requests to HTTPSOpen the Amazon EC2 console.Under Load Balancing in the sidebar, choose Load Balancers.Select a load balancer, and then choose Listeners, Add listener. Note: Skip to step 6 if you already have an HTTP listener.For Protocol: port, choose HTTP. You can either keep the default port or specify a custom port.For Default actions, choose Add action, redirect to, and then enter port 443 (or a different port if you’re not using the default). For more details, see Rule action types. To save, choose the checkmark icon. Note: If you created a new HTTP listener following steps 3-5 in this set of tasks, skip to Create an HTTPS listener.Select a load balancer, and then choose HTTP Listener.Under Rules, choose View/edit rules.Choose Edit Rule to modify the existing default rule to redirect all HTTP requests to HTTPS. Or, insert a rule between the existing rules (if appropriate for your use case).Under Then, delete the existing condition. Then, add the new condition with the Redirect to action.For HTTPS, enter 443 port.Keep the default for the remaining options.Note: If you want to change the URL or return code, you can modify these options as needed.To save, choose the checkmark icon.Create an HTTPS listenerNote: If you already have an HTTPS listener with a rule to forward requests to the respective target group, skip to Verify that the security group of the Application Load Balancer allows traffic on 443.Choose Listeners, Add listener.For Protocol: port, choose HTTPS. Keep the default port or specify a custom port.For Default actions, choose Add action, Forward to.Select a target group that hosts application instances.Select a predefined security policy that's best suited for your configuration.Choose Default Security Certificate. (If you don’t have one, you can create a security certificate.)Choose Save.Verify that the security group of the Application Load Balancer allows traffic on 443Choose the load balancer's Description.Under Security, choose Security group ID.Verify the inbound rules. The security group must have an inbound rule that permits traffic on HTTP and HTTPS. If there are no inbound rules, complete the following steps to add them.To add inbound rules (if you don't already have them):Choose Actions, Edit Inbound Rules to modify the security group.Choose Add rule.For Type, choose HTTPS.For Source, choose Custom (0.0.0.0/0 or Source CIDR).Choose Save.Follow"
https://repost.aws/knowledge-center/elb-redirect-http-to-https-using-alb
Why is my CloudFormation stack stuck in an IN_PROGRESS state?
"My AWS CloudFormation stack is stuck in the CREATE_IN_PROGRESS, UPDATE_IN_PROGRESS, UPDATE_ROLLBACK_IN_PROGRESS, or DELETE_IN_PROGRESS state."
"My AWS CloudFormation stack is stuck in the CREATE_IN_PROGRESS, UPDATE_IN_PROGRESS, UPDATE_ROLLBACK_IN_PROGRESS, or DELETE_IN_PROGRESS state.Short descriptionIn most situations, you must wait for your CloudFormation stack to time out. The timeout length varies, and is based on the individual resource stabilization requirements that CloudFormation waits for to reach the desired state.You can control stack timeout and use rollback triggers to control the length of time that CloudFormation waits. For more information on rollback triggers, see Use AWS CloudFormation stack termination protection and rollback triggers to maintain infrastructure availability.ResolutionIdentify the stuck resource1.    Open the CloudFormation console.2.    In the navigation pane, choose Stacks, and then select the stack that's in a stuck state.3.    Choose the Resources tab.4.    In the Resources section, refer to the Status column. Find any resources that are stuck in the create, update, or delete process.Note: These resources might be in the state CREATE_IN_PROGRESS, UPDATE_IN_PROGRESS, or DELETE_IN_PROGRESS.5.    In the AWS Management Console, inspect your resources for the service that corresponds to your resources.Note: The console varies depending on the resource that's stuck. For example, if an Amazon Elastic Container Service (Amazon ECS) service is stuck in the create state, then check that resource in the Amazon ECS console.Check the AWS CloudTrail logsIf the resource doesn’t show any errors in its corresponding console, then use AWS CloudTrail logs to troubleshoot the issue. For information on viewing CloudTrail logs, see Viewing events with CloudTrail Event history.1.    Open the CloudFormation console.2.    In the navigation pane, choose Stacks, and then select the stack that's in a stuck state.3.    Choose the Resources tab.4.    In the Resources section, refer to the Status column. Find any resources that are stuck in the create, update, or delete process.Note: These resources might be in the state CREATE_IN_PROGRESS, UPDATE_IN_PROGRESS, or DELETE_IN_PROGRESS.5.    Choose the Events tab, and then note the timestamp when CloudFormation initialized the creation of that stuck resource.6.    Open the CloudTrail console.7.    In the navigation pane, choose Event history.8.    For Time range, enter the date and time for the timestamp that you noted in step 5 for the starting time (From). For the ending time (To), enter a date and time that's five minutes past the starting time.Note: For example, suppose that CloudFormation initialized the creation of your stuck resource at 9:00 AM on 2020-01-01. In this case, enter 09:00 AM on 2020-01-01 as your starting time and 9:05 AM on 2020-01-01 as your ending time.9.    Choose Apply.10.    In the returned list of events, find the API calls that are related to the create or update API call of your resource. For example, you can find ModifyVolume for Amazon Elastic Block Store (Amazon EBS) volume updates.Tip: Wait a few minutes for the API calls to show up in the CloudTrail logs. API calls don't always appear immediately in the logs.Bypass the timeoutThere are multiple reasons why a stack can become stuck. Therefore, the resolution varies depending on the resource that's stuck. In some cases, you can bypass the timeout to quickly resolve your stack's status. For example, you might be able to bypass the timeout for custom resources and Amazon ECS services. 
See the following resources for more information:How can I stop my Amazon ECS service from failing to stabilize in AWS CloudFormation?How do I delete a Lambda-backed custom resource that's stuck in DELETE_FAILED status or DELETE_IN_PROGRESS status in CloudFormation?If the stack is stuck in the CREATE_IN_PROGRESS or UPDATE_IN_PROGRESS state, then you can stop the progress using stack operations:CREATE_IN_PROGRESS: Delete the stack to stop the creation process.UPDATE_IN_PROGRESS: Cancel the stack update.Note: To understand the root cause of the issue and avoid it in future deployments, refer to the Troubleshooting CloudFormation guide.Follow"
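Note: The following AWS CLI sketch lists resources that are still in an in-progress state and cancels an update that is stuck in UPDATE_IN_PROGRESS. The stack name is a placeholder.

# List resources that are still in an IN_PROGRESS state
aws cloudformation describe-stack-events --stack-name <my-stack> \
  --query "StackEvents[?ends_with(ResourceStatus,'_IN_PROGRESS')].[LogicalResourceId,ResourceStatus,Timestamp]" \
  --output table

# Cancel an update that is stuck in UPDATE_IN_PROGRESS
aws cloudformation cancel-update-stack --stack-name <my-stack>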
https://repost.aws/knowledge-center/cloudformation-stack-stuck-progress
How can I troubleshoot the pod status ErrImagePull and ImagePullBackoff errors in Amazon EKS?
My Amazon Elastic Kubernetes Service (Amazon EKS) pod status is in the ErrImagePull or ImagePullBackoff status.
"My Amazon Elastic Kubernetes Service (Amazon EKS) pod status is in the ErrImagePull or ImagePullBackoff status.Short descriptionIf you run the kubectl command get pods and your pods are in the ImagePullBackOff status, then the pods aren't running correctly. The ImagePullBackOff status means that a container couldn't start because an image was unable to get retrieved or pulled. For more information, see Amazon EKS connector pods are in ImagePullBackOff status.You might receive an ImagePull error if:An image name, tag, or digest are incorrect.The images require credentials to authenticate.The registry isn't accessible.Resolution1. Check the pod status, error message, and verify that the image name, tag, and SHA are correctTo get the status of a pod, run the kubectl command get pods:$ kubectl get pods -n defaultNAME READY STATUS RESTARTS AGEnginx-7cdbb5f49f-2p6p2 0/1 ImagePullBackOff 0 86sTo get the details of a pods error message, run the kubectl command describe pod:$ kubectl describe pod nginx-7cdbb5f49f-2p6p2...Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m23s default-scheduler Successfully assigned default/nginx-7cdbb5f49f-2p6p2 to ip-192-168-149-143.us-east-2.compute.internal Normal Pulling 2m44s (x4 over 4m9s) kubelet Pulling image "nginxx:latest" Warning Failed 2m43s (x4 over 4m9s) kubelet Failed to pull image "nginxx:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for nginxx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied Warning Failed 2m43s (x4 over 4m9s) kubelet Error: ErrImagePull Warning Failed 2m32s (x6 over 4m8s) kubelet Error: ImagePullBackOff Normal BackOff 2m17s (x7 over 4m8s) kubelet Back-off pulling image "nginxx:latest"$ kubectl describe pod nginx-55d75d5f56-qrqmp ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m20s default-scheduler Successfully assigned default/nginx-55d75d5f56-qrqmp to ip-192-168-149-143.us-east-2.compute.internal Normal Pulling 40s (x4 over 2m6s) kubelet Pulling image "nginx:latestttt" Warning Failed 39s (x4 over 2m5s) kubelet Failed to pull image "nginx:latestttt": rpc error: code = Unknown desc = Error response from daemon: manifest for nginx:latestttt not found: manifest unknown: manifest unknown Warning Failed 39s (x4 over 2m5s) kubelet Error: ErrImagePull Warning Failed 26s (x6 over 2m5s) kubelet Error: ImagePullBackOff Normal BackOff 11s (x7 over 2m5s) kubelet Back-off pulling image "nginx:latestttt"Make sure that your image tag and name exist and are correct. If the image registry requires authentication, make sure that you are authorized to access it. To verify that the image used in the pod is correct, run the following command:$ kubectl get pods nginx-7cdbb5f49f-2p6p2 -o jsonpath="{.spec.containers[*].image}" | \sortnginxx:latestTo understand the pod status values, see Pod phase on the Kubernetes website.For more information, see How can I troubleshoot the pod status in Amazon EKS?2. Amazon Elastic Container Registry (Amazon ECR) imagesIf you're trying to pull images from Amazon ECR using Amazon EKS, additional configuration might be required. If your image is stored in an Amazon ECR private registry, make sure that you specify the credentials imagePullSecrets on the pod. 
These credentials are used to authenticate with the private registry.Create a Secret named regcred:kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>Be sure to replace the following credentials:<your-registry-server> is your Private Docker Registry FQDN. Use https://index.docker.io/v1/ for DockerHub.<your-name> is your Docker username.<your-pword> is your Docker password.<your-email> is your Docker email.You have successfully set your Docker credentials in the cluster as a Secret named regcred.To understand the contents of the regcred Secret, view the Secret in YAML format:kubectl get secret regcred --output=yamlIn the following example, a pod needs access to your Docker credentials in regcred:apiVersion: v1kind: Podmetadata: name: private-regspec: containers: - name: private-reg-container image: <your-private-image> imagePullSecrets: - name: regcredReplace <your-private-image> with the path to an image in a private registry, similar to the following:your.private.registry.example.com/bob/bob-private:v1To pull the image from the private registry, Kubernetes requires the credentials. The imagePullSecrets field in the configuration file specifies that Kubernetes must get the credentials from a Secret named regcred.For more options with creating a Secret, see create a Pod that uses a Secret to pull an image on the Kubernetes website.
means that kubelet tried to connect to the Docker Registry endpoint and failed due to a connection timeout.To troubleshoot this error, check your subnet, security groups, and network ACL that allow communication to the specified registry endpoint.In the following example, the registry rate limit has exceeded:$ kubectl describe pod nginx-6bf9f7cf5d-22q48...Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 3m54s default-scheduler Successfully assigned default/nginx-6bf9f7cf5d-22q48 to ip-192-168-153-54.us-east-2.compute.internal Warning FailedCreatePodSandBox 3m33s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "82065dea585e8428eaf9df89936653b5ef12b53bef7f83baddb22edc59cd562a" network for pod "nginx-6bf9f7cf5d-22q48": networkPlugin cni failed to set up pod "nginx-6bf9f7cf5d-22q48_default" network: add cmd: failed to assign an IP address to container Warning FailedCreatePodSandBox 2m53s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "20f2e27ba6d813ffc754a12a1444aa20d552cc9d665f4fe5506b02a4fb53db36" network for pod "nginx-6bf9f7cf5d-22q48": networkPlugin cni failed to set up pod "nginx-6bf9f7cf5d-22q48_default" network: add cmd: failed to assign an IP address to container Warning FailedCreatePodSandBox 2m35s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d9b7e98187e84fed907ff882279bf16223bf5ed0176b03dff3b860ca9a7d5e03" network for pod "nginx-6bf9f7cf5d-22q48": networkPlugin cni failed to set up pod "nginx-6bf9f7cf5d-22q48_default" network: add cmd: failed to assign an IP address to container Warning FailedCreatePodSandBox 2m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c02c8b65d7d49c94aadd396cb57031d6df5e718ab629237cdea63d2185dbbfb0" network for pod "nginx-6bf9f7cf5d-22q48": networkPlugin cni failed to set up pod "nginx-6bf9f7cf5d-22q48_default" network: add cmd: failed to assign an IP address to container Normal SandboxChanged 119s (x4 over 3m13s) kubelet Pod sandbox changed, it will be killed and re-created. Normal Pulling 56s (x3 over 99s) kubelet Pulling image "httpd:latest" Warning Failed 56s (x3 over 99s) kubelet Failed to pull image "httpd:latest": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Warning Failed 56s (x3 over 99s) kubelet Error: ErrImagePull Normal BackOff 43s (x4 over 98s) kubelet Back-off pulling image "httpd:latest" Warning Failed 43s (x4 over 98s) kubelet Error: ImagePullBackOffThe Docker registry rate limit is 100 container image requests per six hours for anonymous usage, and 200 for Docker accounts. Image requests exceeding these limits are denied access until the six hour window elapses. To manage usage and understand registry rate limits, see Understanding Your Docker Hub Rate Limit on the Docker website.Related informationAmazon EKS troubleshootingHow do I troubleshoot Amazon ECR issues with Amazon EKS?Security best practices for Amazon EKSFollow"
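Note: For images in a private Amazon ECR registry, one commonly used pattern is to create the pull secret with a temporary ECR authorization token, as sketched below. The account ID, Region, pod name, and secret name are placeholders, and on EKS worker nodes whose IAM role has ECR permissions this secret usually isn't needed because the kubelet can pull from ECR directly.

kubectl create secret docker-registry regcred \
  --docker-server=<account-id>.dkr.ecr.<region>.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region <region>)"

# Confirm the image reference that the pod is actually using
kubectl get pod <pod-name> -o jsonpath="{.spec.containers[*].image}"

Keep in mind that ECR authorization tokens expire after 12 hours, so a secret created this way must be refreshed periodically.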
https://repost.aws/knowledge-center/eks-pod-status-errors
How do I avoid the "Unable to validate the following destination configurations" error with Lambda event notifications in CloudFormation?
"My stack fails when I deploy an AWS CloudFormation template. Then, I receive an error similar to the following: "Unable to validate the following destination configurations.""
"My stack fails when I deploy an AWS CloudFormation template. Then, I receive an error similar to the following: "Unable to validate the following destination configurations."Short descriptionYou receive this error when you deploy a CloudFormation template with the following resources:An AWS Lambda function resourceAn Amazon Simple Storage Service (Amazon S3) bucket resource with a NotificationConfiguration property that references the Lambda functionA Lambda permission resource with FunctionName and SourceArn properties that match the Lambda function and the S3 bucketAmazon S3 must validate the notification configuration when it creates the bucket. The validation is done by checking if the bucket has permission to push events to the Lambda function. The permission resource (which must exist for this check to pass) requires the bucket name. This means that the permission resource depends on the bucket, and the bucket depends on the permission resource.Note: You receive the "Circular dependency between resources" error if you try to resolve this issue by implementing a DependsOn resource attribute similar to the following code example.The following example shows a circular dependency between the S3 bucket resource and the SourceArn property of the Lambda permission resource."Resources": { "MyS3BucketPermission": { "Type": "AWS::Lambda::Permission", "Properties": { "Action": "lambda:InvokeFunction", ... ... "SourceArn": { "Ref": "MyS3Bucket" } } }, "MyS3Bucket": { "DependsOn" : "MyS3BucketPermission", ... ...Important: It's a best practice to add the SourceAccount property to the Lambda permission resource for Amazon S3 event sources. You add the property because an Amazon Resource Name (ARN) for Amazon S3 doesn't include an account ID. The SourceArn property is adequate for most other event sources, but consider adding the SourceAccount property for Amazon S3 event sources. This prevents users from re-creating a bucket that you deleted, and then granting a new bucket owner full permissions to invoke your Lambda function.ResolutionYou can avoid circular dependencies by using the Fn::Sub intrinsic function with stack parameters. You can also use Fn::Join to combine strings.In the following sample template, the S3 bucket name BucketPrefix is a parameter for AWS::S3::Bucket and AWS::Lambda::Permission resources.Note: The following example assumes that the bucket name wasn't used previously with your AWS accounts. 
If you want to reuse a template with this code snippet, then you must provide a different bucket prefix every time.{ "AWSTemplateFormatVersion": "2010-09-09", "Parameters": { "BucketPrefix": { "Type": "String", "Default": "test-bucket-name" } }, "Resources": { "EncryptionServiceBucket": { "DependsOn": "LambdaInvokePermission", "Type": "AWS::S3::Bucket", "Properties": { "BucketName": { "Fn::Sub": "${BucketPrefix}-encryption-service" }, "NotificationConfiguration": { "LambdaConfigurations": [ { "Function": { "Fn::GetAtt": [ "AppendItemToListFunction", "Arn" ] }, "Event": "s3:ObjectCreated:*", "Filter": { "S3Key": { "Rules": [ { "Name": "suffix", "Value": "zip" } ] } } } ] } } }, "LambdaInvokePermission": { "Type": "AWS::Lambda::Permission", "Properties": { "FunctionName": { "Fn::GetAtt": [ "AppendItemToListFunction", "Arn" ] }, "Action": "lambda:InvokeFunction", "Principal": "s3.amazonaws.com", "SourceAccount": { "Ref": "AWS::AccountId" }, "SourceArn": { "Fn::Sub": "arn:aws:s3:::${BucketPrefix}-encryption-service" } } }, "AppendItemToListFunction": { "Type": "AWS::Lambda::Function", "Properties": { "Handler": "index.handler", "Role": { "Fn::GetAtt": [ "LambdaExecutionRole", "Arn" ] }, "Code": { "ZipFile": { "Fn::Join": [ "", [ "var response = require('cfn-response');", "exports.handler = function(event, context) {", " var responseData = {Value: event.ResourceProperties.List};", " responseData.Value.push(event.ResourceProperties.AppendedItem);", " response.send(event, context, response.SUCCESS, responseData);", "};" ] ] } }, "Runtime": "nodejs12.x" } }, "LambdaExecutionRole": { "Type": "AWS::IAM::Role", "Properties": { "AssumeRolePolicyDocument": { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "lambda.amazonaws.com" ] }, "Action": [ "sts:AssumeRole" ] } ] }, "Path": "/", "Policies": [ { "PolicyName": "root", "PolicyDocument": { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:*" ], "Resource": "arn:aws:logs:*:*:*" } ] } } ] } } }}The template avoids a circular dependency because it creates the resources in the following order:AWS Identity and Access Management (IAM) roleLambda functionLambda permissionS3 bucketNow, Amazon S3 can verify its notification configuration and create the bucket without any issues.You can also try the following resolutions:Create the S3 bucket without a notification configuration, and then add the bucket in the next stack update.Create a less-constrained Lambda permission. For example, allow invocations for a specific AWS account by omitting SourceArn.Create a custom resource to run at the end of the stack workflow. This resource adds the notification configuration to the S3 bucket after all other resources are created.Related informationHow do I avoid the "Unable to validate the following destination configurations" error in AWS CloudFormation?Using AWS Lambda with Amazon S3 EventsRefFn::GetAttFollow"
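Note: The following AWS CLI sketch deploys the example template. The template file name, stack name, and bucket prefix are assumptions that you replace with your own values.

aws cloudformation deploy \
  --template-file template.json \
  --stack-name s3-lambda-notification-demo \
  --parameter-overrides BucketPrefix=<unique-bucket-prefix> \
  --capabilities CAPABILITY_IAM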
https://repost.aws/knowledge-center/unable-validate-circular-dependency-cloudformation
How do I resolve a yellow or red health status warning in my Elastic Beanstalk environment?
I want to repair the health status of my AWS Elastic Beanstalk environment when it is in yellow (Warning) or red (Degraded) status.
"I want to repair the health status of my AWS Elastic Beanstalk environment when it is in yellow (Warning) or red (Degraded) status.Short descriptionA yellow or red health status warning in your Elastic Beanstalk environment can result from some of the following common issues:The health agent is reporting an insufficient amount of data on an Amazon Elastic Compute Cloud (Amazon EC2) instance.An operation is in progress on an instance within the command timeout.An Elastic Beanstalk environment is updating.Load balancer health checks are failing.The health agent is reporting a high number of request failures.An environment resource, such as an instance, is unavailable.An operation on an instance is taking a long time.An instance is in a Severe state.The Elastic Beanstalk health daemon failed.The Elastic Beanstalk environment failed one or more health checks.Elastic Beanstalk is receiving an increased number of 4xx or 5xx HTTP return codes.There are deployment failures with command timeouts.For more information on warnings, see Health colors and statuses.ResolutionIdentify the cause of the health warningOpen the Elastic Beanstalk console.Choose your application.In the navigation pane, choose Events.In the Type column, look for recent events with a Severity type of WARN, and then note these events for troubleshooting later on.In the navigation pane, choose Dashboard.In the Health section, choose Causes.Now, you can view the overall health of your environment on the Enhanced Health Overview page.For more information, see Enhanced health monitoring with the environment management console.Troubleshoot the identified cause of the health warningBased on the health issues that you identify in the Enhanced Health Overview page, choose one of the following troubleshooting approaches:For failing load balancer health checks see How do I troubleshoot ELB health checks with Elastic Beanstalk?For other health check failures, see Basic health reporting, or see Enhanced health reporting and monitoring if you're using enhanced health reporting.For operations that are taking too long, identify the operation in progress using the Elastic Beanstalk event stream. Or, monitor the /var/log/eb-engine.log by logging in to your Amazon EC2 instance.Note: Operations that take longer than usual are typically environment deployments or configuration updates.For an increased number of 4xx and 5xx HTTP return codes, identify the cause by monitoring the access logs of the proxy server. Then, compare the access logs with the application logs to identify the pattern of increased errors. For more information, see Common errors.Note: The proxy server logs can be the access logs for Apache ( /var/log/httpd/access_log), NGINX ( /var/log/nginx/access_log), or Internet Information Services (C:\inetpub\logs\LogFiles), depending on your platform.For instances in a Severe state, choose a solution based on the warning issued. For more information, see Troubleshoot EC2 instances.Note: Your instances can be in a Severe state due to an ongoing deployment, failed health daemon on the Amazon EC2 instance, or high resource utilization. In most cases, the warning state in your environment is temporary and transitions to green (OK) after you address the cause of the issue. 
For more information, see Health colors and statuses.For a failed Elastic Beanstalk health daemon, log in to your Amazon EC2 instance and monitor /var/log/messages and /var/log/healthd/daemon.log to identify the cause.Note: If you see a message saying None of the instances are sending data, then see Resolving errors from EC2 instances failing to communicate.For any warnings related to CPU or memory utilization issues, check How do I troubleshoot memory and CPU issues in Elastic Beanstalk?Follow"
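Note: You can pull the same enhanced health detail and recent warning events from the AWS CLI, as sketched below. The environment name is a placeholder.

aws elasticbeanstalk describe-environment-health \
  --environment-name <my-env> --attribute-names All

aws elasticbeanstalk describe-events \
  --environment-name <my-env> --severity WARN --max-items 20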
https://repost.aws/knowledge-center/elastic-beanstalk-yellow-warning
How do I download logs from the Elastic Beanstalk console without receiving an Access Denied error?
I want to download logs from the AWS Elastic Beanstalk console without receiving an Access Denied error or having the logs page get stuck loading.
"I want to download logs from the AWS Elastic Beanstalk console without receiving an Access Denied error or the logs page stuck in loading.Short descriptionWhen you request tail logs in the Elastic Beanstalk environment console or with eb logs, the most recent log entries link together. The log entries link together into a single text file and are uploaded to Amazon Simple Storage Service (Amazon S3) by an instance in your environment.When you request bundle logs, an instance in your environment packages the full log files into a ZIP archive and uploads it to Amazon S3.The instances in your environment must have an Elastic Beanstalk instance profile with permission (s3:Get* ,s3:List*, s3:PutObject) to write to your Amazon S3 bucket. These permissions are included in the default instance profile. If you're using a custom instance profile role, then include these permissions.To resolve the Access Denied errors or logs stuck in download when trying to retrieve logs from your AWS Elastic Beanstalk console, check the following:Amazon S3 user permissionsAmazon S3 bucket policyAmazon S3 bucket encrypted with KMS keyAmazon S3 gateway endpoint policyService control policies (SCP)Resource utilizationResolutionAmazon S3 user permissionsElastic Beanstalk uses user permissions to save or upload logs to your Elastic Beanstalk S3 bucket. AWS Identity and Access Management (IAM) users must have the following permissions to retrieve logs from the Elastic Beanstalk console:s3:PutObjects3:GetObjects3:GetBucketAcls3:PutObjectAclNote: Your user policy must also have the s3:DeleteObject permission because Elastic Beanstalk uses your user permissions to delete the logs from Amazon S3.Example:{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:GetBucketAcl", "s3:PutObjectAcl" ], "Resource": "*" } ]}Amazon S3 bucket policyCheck your Elastic Beanstalk Amazon S3 bucket policy and make sure the PutObject permission is allowed for your instance profile. The PutObject permission is automatically allowed for your default instance profile (aws-elasticbeanstalk-ec2-role). If you're using a custom instance profile, then make sure to add the PutObject permission.Example:{ "Sid": "eb-ad78f54a-f239-4c90-adda-49e5f56cb51e", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::123456789012 :role/aws-elasticbeanstalk-ec2-role", "arn:aws:iam::126355979347:role/custom-instance-profile-role" ] }, "Action": "s3:PutObject", "Resource": "arn:aws:s3:::elasticbeanstalk-ap-south-1-123456789012/resources/environments/logs/*" },Amazon S3 bucket encrypted with KMS keyKMS key encryption can be added to the Amazon S3 buckets used by Elastic Beanstalk. When KMS key encryption is added, the presigned URL generated by the Elastic Beanstalk Pull Bundle Logs action on the console fails. This failure is indicated by an Access Denied error.As a workaround, you can manually download the bundle logs from the Amazon S3 bucket locations. For more information, see Log location in Amazon S3.Amazon S3 gateway endpoint policyAn Elastic Beanstalk environment can be created in private subnets using Amazon Virtual Private Cloud (Amazon VPC) endpoints. For this scenario, you must have an Amazon S3 gateway endpoint to communicate with instances and retrieve files such as UserdataBootstrap.sh and platform.zip. Check whether there are any user restrictions at the Amazon S3 gateway endpoint level. 
For more information, see Gateway endpoints for Amazon S3.Service control policiesIf your permissions are correct and you still receive an Access Denied error, then check whether an organizational policy is turned on for your account. For more information, see Service control policies (SCPs).Resource utilizationIf all permissions and policies are correctly configured, then check your resource utilization for your Amazon Elastic Compute Cloud (Amazon EC2) instance. If the server is over-utilized, such as with high CPU or memory usage, then your logs can get stuck downloading. To resolve this, change your instance type to increase your CPU and memory capacity. For example, you can change it from t2.micro to t2.medium.Follow"
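Note: If the console download keeps failing, you can also request and retrieve the bundle logs directly with the AWS CLI, which surfaces permission errors more explicitly. The environment name is a placeholder.

aws elasticbeanstalk request-environment-info \
  --environment-name <my-env> --info-type bundle

# Wait a minute or two, then retrieve the presigned S3 URL for the bundle
aws elasticbeanstalk retrieve-environment-info \
  --environment-name <my-env> --info-type bundle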
https://repost.aws/knowledge-center/elastic-beanstalk-download-logs-error
How do I perform Git operations on an AWS CodeCommit repository with an instance role on Amazon EC2 instances for Windows?
I want to perform Git operations on an AWS CodeCommit repository from an Amazon Elastic Compute Cloud (Amazon EC2) instance that runs Windows.
"I want to perform Git operations on an AWS CodeCommit repository from an Amazon Elastic Compute Cloud (Amazon EC2) instance that runs Windows.Short descriptionSet up the AWS Command Line Interface (AWS CLI) credential helper to perform Git operations on an AWS CodeCommit repository. Then, create an IAM role on your Amazon EC2 instance to perform pull and push actions.Note: Credential helper is the only connection method that doesn't require an IAM user for CodeCommit repositories.ResolutionNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.1.    Create an IAM role for your EC2 instance, and then attach the following example IAM policy to the role. Replace arn:aws:codecommit:us-east-1:111111111111:SampleRepoName with the Amazon Resource Name (ARN) of your CodeCommit repository.{     "Version": "2012-10-17",     "Statement": [         {             "Effect": "Allow",             "Action": [                 "codecommit:GitPull",                 "codecommit:GitPush"             ],             "Resource": "arn:aws:codecommit:us-east-1:111111111111:SampleRepoName"         }     ] }Note: The policy for step 1 allows the IAM role to perform Git pull and push actions on the CodeCommit repository. For more examples on using IAM policies for CodeCommit, see Using identity-based policies (IAM Policies) for CodeCommit.2.    Attach the IAM role that you created in step 1 to an instance.3.    Install Git on your instance. For information on Windows instances, see Downloads on the Git website.4.    Check the Git version to confirm that Git is properly installed:C:\Users\Administrator> git --version5.    Check the AWS CLI version to confirm that AWS CLI is installed:C:\Users\Administrator> aws --version6.    To set up the credential helper on the Amazon EC2 instance, run the following commands:C:\Users\Administrator> git config --global credential.helper "!aws codecommit credential-helper $@"C:\Users\Administrator> git config --global credential.UseHttpPath trueNote: The commands in step 6 specify the use of the Git credential helper with the AWS credential profile. The credential profile allows Git to authenticate with AWS to interact with CodeCommit repositories. To authenticate, Git uses HTTPS and a cryptographically-signed version of your instance role.7.    To configure your name and email address explicitly, run the following commands:C:\Users\Administrator> git config --global user.email "testuser@example.com"C:\Users\Administrator> git config --global user.name "testuser"8.    To clone the repository to the instance, copy the clone URL from the appropriate CodeCommit repository:C:\Users\Administrator> git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/SampleRepoName9.    Create a commit in your CodeCommit repository.Related informationSet up the credential helperHow do I perform Git operations on an AWS CodeCommit repository with an instance role on Amazon EC2 instances for Amazon Linux 2?Follow"
https://repost.aws/knowledge-center/codecommit-git-repositories-ec2-windows
How do I troubleshoot write latency spikes in my Amazon RDS DB instance?
I want to troubleshoot write latency spikes in my Amazon Relational Database Service (Amazon RDS) DB instance.
"I want to troubleshoot write latency spikes in my Amazon Relational Database Service (Amazon RDS) DB instance.Short descriptionThe Amazon CloudWatch metric WriteLatency defines the average amount of time taken per disk I/O operation. Ideally, write latency must not be more than a single digit millisecond.The spike in write latency for your DB instance might be caused when you do the following:Create an Amazon RDS backup of your Single-AZ database instanceConvert a Single-AZ RDS instance to a Multi-AZ instanceRestore from a DB snapshotPerform a point-in-time restore for your DB instanceCreate a new read replicaThe spike might also result from IOPS or throughput bottleneck caused due to heavy workload on the database.ResolutionTroubleshoot latency spike1.    To troubleshoot the causes for a high read or write latency on your DB instance, check the following CloudWatch metrics:ReadLatency and WriteLatencyReadIOPS and WriteIOPSReadThroughput and WriteThroughputDiskQueueDepthBurstBalance (for gp2 storage)Suppose that you notice one or more of the following:The latency values are high.The throughput and IOPS values have reached their maximum limits.The value of DiskQueueDepth is high.The value of BurstBalance is low (for gp2).This means that your RDS instance is under heavy workload and needs more resources. For more information, see How do I troubleshoot the latency of Amazon EBS volumes caused by an IOPS bottleneck in my Amazon RDS instance?To troubleshoot issues that cause an IOPS or throughput bottleneck, do the following:For an RDS instance with General Purpose SSD (gp2), check the DB instance class and storage size.For an RDS instance with Provisioned IOPS (io1), check the DB instance class and defined Provisioned IOPS.For more information, see DB instance classes and Amazon EBS–optimized instances.2.    If CloudWatch metrics don't indicate any resource throttling, then check the Read IO/s and Write IO/s using Enhanced Monitoring.CloudWatch metrics are recorded at an interval of 60 seconds. Therefore, every spike or drop might not be recorded. However, you can set up the granularity of Enhanced Monitoring for up to one second to capture data. Any spike in resource utilization within a 60-second interval can be captured by Enhanced Monitoring. For more information, see How can I identify if my EBS volume is micro-bursting and how can I prevent this from happening?3.    If all the preceding checks don't indicate the cause for the issue, then check the CloudWatch metrics NetworkReceiveThroughput and NetworkTransmitThroughput to make sure that there are no issues with the network.Mitigate the impact of lazy loadingWhen you restore an RDS database instance from a snapshot, the DB instance continues to load data in the background. This process is known as lazy loading.Lazy Loading might happen in all scenarios that require restoring from a snapshot, such as point-in-time restore, conversion of Single-AZ instance to Multi-AZ instance, and creating a new read replica. If you try to access data that isn't loaded yet, the DB instance immediately downloads the requested data from Amazon Simple Storage Service (Amazon S3). Then, the instance continues to load the rest of the data in the background. For more information, see Amazon EBS snapshots. To help mitigate the effects of lazy loading on tables to which you require quick access, you can perform operations that involve full-table scans, such as SELECT *. 
This allows RDS to download all of the backed-up table data from Amazon S3.Follow best practicesKeep the following best practices and workarounds in mind when dealing with high write latency in your DB instance:Be sure that you have enough resources allocated to your database to run queries. With RDS, the amount of resources allocated depends on the instance type.If neither CloudWatch metrics nor Enhanced Monitoring metrics indicate resource throttling, then use Performance Insights to monitor the database workload. Using Performance Insights, you can identify the underlying SQL queries running in your database when you are experiencing latency with your application. You can use this information to assess the load on your database and determine further actions. For more information, see Monitoring DB load with Performance Insights on Amazon RDS.Prevent micro-bursting by changing the size or type of the Amazon EBS volume according to your use case.To optimize database performance, make sure that your queries are properly tuned.If you're converting your Single-AZ database instance to a Multi-AZ instance, consider doing so during off-business hours.To reduce the impact of lazy loading after a Multi-AZ conversion, consider doing one of the following:Perform a manual failover soon after converting to Multi-AZ instance.Run a full dump or just the required queries to load all the data from the tables. This process can help with loading the data and forcing all blocks to be pushed from S3 into the new host. For Amazon RDS for PostgreSQL instances, you can run the pg_prewarm command.You can configure Amazon CloudWatch alarms on RDS key metrics that are useful for determining the reason for Write Latency spikes in your RDS instances. Examples of these metrics include ReadIOPS, WriteIOPS, ReadThroughput, WriteThroughput, DiskQueueDepth, ReadLatency, and WriteLatency. You can use these alarms to make sure that the instance doesn't throttle.Related informationBest practices for Amazon RDSUnderstanding burst vs. baseline performance with Amazon RDS and GP2Modifying a DB instance to be a Multi-AZ DB instance deploymentTutorial: Restore an Amazon RDS DB instance from a DB snapshotFollow"
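Note: To see whether a latency spike lines up with an IOPS or throughput ceiling, you can pull the relevant CloudWatch metrics with the AWS CLI, as sketched below. The instance identifier and time window are placeholders.

aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS --metric-name WriteLatency \
  --dimensions Name=DBInstanceIdentifier,Value=<my-db-instance> \
  --start-time 2023-01-01T00:00:00Z --end-time 2023-01-01T06:00:00Z \
  --period 60 --statistics Average Maximum

Repeat the same call with WriteIOPS, WriteThroughput, DiskQueueDepth, and BurstBalance to compare the spike against the other metrics listed above.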
https://repost.aws/knowledge-center/rds-write-latency-spikes
How do I upgrade my Microsoft SQL cluster on my EC2 Windows instances?
I want to upgrade my Microsoft SQL cluster on my Amazon Elastic Compute Cloud (Amazon EC2) Windows instances. How do I do this?
"I want to upgrade my Microsoft SQL cluster on my Amazon Elastic Compute Cloud (Amazon EC2) Windows instances. How do I do this?Short descriptionUpgrading your Microsoft SQL cluster consists of the following three steps:Upgrade the Windows OS that SQL is running on.Upgrade the SQL DB engine version.Upgrade the SQL edition.ResolutionUpgrade the Windows OS that SQL is running onWhen upgrading the underlying OS, make sure that the version of Windows you're using is compatible with your SQL version. For more information, see How can I upgrade my EC2 Windows Server instance OS to a newer version?If your cluster is 2012R2 or later, use a cluster rolling upgrade to minimize downtime.Note: Make sure that you have your SQL licensing information for reference when upgrading.Upgrade the SQL version on your Windows instanceThere are two scenarios when upgrading SQL:Upgrade a failover cluster instance (FCI) or Availability Group (AG).For more information on upgrading an FCI, see Upgrade a failover cluster instance.For more information on upgrading an AG, see Upgrading Always On Availability Group replica instances.Migrate from an older version to newer version for FCI or AG: For more information, see Upgrade SQL Server instances running on Windows Server 2008/2008 R2/2012 clusters.Upgrade the SQL Server editionUpgrade SQL Server on a failover cluster instance. On SQL Server failover cluster instances, run edition upgrade on any one of the instance's active or passive nodes.Upgrade SQL Server on AG. On AG, both nodes must be upgraded separately. In the SQL Server Installation Center, make sure to select Maintenance, Edition Upgrade.For more information, see Upgrade to a different edition of SQL Server (Setup).Related informationHow do I troubleshoot Microsoft SQL issues on my EC2 Windows instance?How do I launch Microsoft SQL Server on an EC2 Windows instance?Follow"
https://repost.aws/knowledge-center/ec2-windows-instance-upgrade-sql-cluster
Why am I unable to perform elastic resize for my Amazon Redshift cluster?
"I tried to perform an elastic resize using AWS CloudFormation for my Amazon Redshift cluster. However, it performed a classic resize instead. Why is this happening?"
"I tried to perform an elastic resize using AWS CloudFormation for my Amazon Redshift cluster. However, it performed a classic resize instead. Why is this happening?ResolutionImportant: If you resized the cluster using the Amazon Redshift console, it isn't registered in the AWS CloudFormation template. Instead, use the AWS CloudFormation template to be sure that the numberofNodes parameter gets updated. Otherwise, Amazon Redshift might perform a classic resize, despite meeting the resize requirements. Amazon Redshift behaves this way when there are no changes to the node count since the last resize.Amazon Redshift performs a classic resize if any of these requirements aren't met:Only the numberofNodes parameter is modified.For dc2.large or ds2.xlarge node types, you can only double the node count or decrease the node count by half of the original cluster.For dc2.8xlarge or ds2.8xlarge node types, you can resize up to two times the original node count, or resize down to half of the original node count. For example, you can resize a 16 node cluster to any size that is between 8 and 32 nodes.The number of nodes cannot exceed the number of slices. The number of slices are determined when the Amazon Redshift cluster is launched. For example, if you launch a cluster with two dc2.large nodes, then there are four slices of the cluster. Therefore, you'll only be able to increase your node count to four nodes when using elastic resize.If your Amazon Redshift cluster performed a classic resize, investigate the following areas:Check the Amazon Redshift console to confirm the actual number of nodes in your cluster. Verify whether it matches the numberofNodes parameter in your AWS CloudFormation template.Use the DescribeClusters API to retrieve information from AWS CloudTrail and determine root cause analysis. Look for the elasticResizeNumberOfNodeOptions parameter in the AWS CloudTrail logs to verify whether your Amazon Redshift cluster is eligible for an elastic resize. If the parameter does not list an option to update node count, it confirms that the cluster slices failed to meet the elastic resize requirements.Note: Before updating the node count for your Amazon Redshift cluster, use the DescribeNodeConfigurationOptions API. The DescribeNodeConfigurationOptions API can help you determine the appropriate node configurations for an elastic resize such as node count and type.Related informationHow do I resize an Amazon Redshift cluster?Overview of managing clusters in Amazon RedshiftFollow"
https://repost.aws/knowledge-center/redshift-troubleshoot-elastic-resize
How can I troubleshoot GuardDuty custom Amazon SNS notifications that are not being delivered?
Why are my Amazon GuardDuty custom Amazon Simple Notification Service (Amazon SNS) notifications not being delivered?
"Why are my Amazon GuardDuty custom Amazon Simple Notification Service (Amazon SNS) notifications not being delivered?Short descriptionI followed the instructions to configure an Amazon EventBridge rule for GuardDuty to send custom SNS notifications if specific AWS service event types trigger. However, the SNS notifications weren't delivered.ResolutionFollow these instructions to confirm the correct settings for:Amazon SNS subscription confirmation.Amazon SNS topic AWS Identity and Access Management (IAM) access policy.AWS Key Management Service (AWS KMS) permissions.EventBridge event pattern JSON object finding type.Confirm the Amazon SNS subscriptionOpen the Amazon SNS console, and then choose Subscriptions.For your Amazon SNS subscription ID, verify that the status is Confirmed.If the status is Pending confirmation, follow the instructions to confirm the subscription.Confirm permissions for the SNS topic access policyOpen the Amazon SNS console, and then choose Topics.In Name, choose your Amazon SNS topic.In Details, choose the Access policy tab.Verify that the IAM policy allows permission to publish the events.amazonaws.com principal similar to the following:{ "Sid": "AWSEvents", "Effect": "Allow", "Principal": { "Service": "events.amazonaws.com" }, "Action": "sns:Publish", "Resource": "arn:aws:sns:YOUR-REGION:YOUR-ACCOUNT-ID:YOUR-SNS-TOPIC"}Confirm AWS Key Management Service (AWS KMS) permissionsOpen the AWS KMS console, and then choose Customer managed keys.In Key ID, choose your AWS KMS key.In Key policy, choose Switch to policy view.Verify that the KMS key policy allows permission to publish the events.amazonaws.com principal similar to the following:{ "Sid": "AWSEvents", "Effect": "Allow", "Principal": { "Service": "events.amazonaws.com" }, "Action": [ "kms:GenerateDataKey", "kms:Decrypt" ], "Resource": "*"}Confirm the EventBridge event pattern JSON object finding typeOpen the EventBridge console, and then choose Rules.In Name, choose your rule.In Event pattern, verify that the JSON object finding type matches the specific AWS service similar to the following:{ "source": [ "aws.guardduty" ], "detail-type": [ "GuardDuty Finding" ]}Related informationMonitoring your security with GuardDuty in real time with Amazon Elasticsearch Service (Amazon ES)GuardDuty finding typesFollow"
https://repost.aws/knowledge-center/guardduty-troubleshoot-sns
Why does my Amazon Athena query fail with the error "HIVE_BAD_DATA: Error parsing field value for field X: For input string: "12312845691""?
"When I query data in Amazon Athena, I get an error similar to one of the following:"HIVE_BAD_DATA: Error parsing field value for field X: For input string: "12312845691""HIVE_BAD_DATA: Error parsing column '0': target scale must be larger than source scale""
"When I query data in Amazon Athena, I get an error similar to one of the following:"HIVE_BAD_DATA: Error parsing field value for field X: For input string: "12312845691""HIVE_BAD_DATA: Error parsing column '0': target scale must be larger than source scale"Short descriptionThere are several versions of the HIVE_BAD_DATA error. If the error message specifies a null or empty input string (for example, "For input string: """), see Why do I get the error "HIVE_BAD_DATA: Error parsing field value '' for field X: For input string: """ when I query CSV data in Amazon Athena?Errors that specify an input string with a value occur under one of the following conditions:The data type defined in the table definition doesn't match the actual source data.A single field contains different types of data (for example, an integer value for one record and a decimal value for another record).ResolutionIt's a best practice to use only one data type in a column. Otherwise, the query might fail. To resolve errors, be sure that each column contains values of the same data type, and that the values are in the allowed ranges.If you still get errors, change the column's data type to a compatible data type that has a higher range. If you can't solve the problem by changing the data type,then try the solutions in the following examples.Example 1Source format: JSONIssue: In the last record, the "id" key value is "0.54," which is the DECIMAL data type. However, for the other records, the "id" key value is set to INT.Source data:{ "id" : 50, "name":"John" }{ "id" : 51, "name":"Jane" }{ "id" : 53, "name":"Jill" }{ "id" : 0.54, "name":"Jill" }Data Definition Language (DDL) statement:CREATE EXTERNAL TABLE jsontest_error_hive_bad_data ( id INT, name STRING)ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'WITH SERDEPROPERTIES ( 'ignore.malformed.json' = 'true')LOCATION 's3://awsexamplebucket/jsontest_error_hive_bad_data/';Data Manipulation Language (DML) statement:SELECT * FROM jsontest_error_hive_bad_dataError:Your query has the following error(s):HIVE_BAD_DATA: Error parsing field value '0.54' for field 0: For input string: "0.54"This query ran against the "default" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: bd50793b-94fc-42a7-b131-b7c81da273b2.To resolve this issue, redefine the "id" column as STRING. The STRING data type can correctly represent all values in this dataset. Example:CREATE EXTERNAL TABLE jsontest_error_hive_bad_data_correct_id_data_type ( id STRING, name STRING)ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'WITH SERDEPROPERTIES ( 'ignore.malformed.json' = 'true')LOCATION 's3://awsexamplebucket/jsontest_error_hive_bad_data/';DML statement:SELECT * FROM jsontest_error_hive_bad_data_correct_id_data_typeYou can also CAST to the desired data type. For example, you can cast a string as an integer. However, depending on the data types that you are casting from and to, this might return null or inaccurate results. Values that can't be cast are discarded. For example, casting the string value "0.54" to INT returns null results:SELECT TRY_CAST(id AS INTEGER) FROM jsontest_error_hive_bad_data_correct_id_data_typeExample output:Results _col01 502 513 534The output shows that the value "0.54" was discarded. You can't cast that value directly from a string to an integer. To resolve this issue, use COALESCE to CAST the mixed type values in the same column as the output. Then, allow the aggregate function to run on the column. 
Example:SELECT COALESCE(TRY_CAST(id AS INTEGER), TRY_CAST(id AS DECIMAL(10,2))) FROM jsontest_error_hive_bad_data_correct_id_data_typeOutput (_col0): row 1: 50.00, row 2: 51.00, row 3: 53.00, row 4: 0.54. Run aggregate functions:SELECT SUM(COALESCE(TRY_CAST(id AS INTEGER), TRY_CAST(id AS DECIMAL(10,2)))) FROM jsontest_error_hive_bad_data_correct_id_data_typeOutput (_col0): row 1: 154.54. Example 2Source format: JSONIssue: The "id" column is defined as INT. Athena couldn't parse "49612833315" because the range for INT values in Presto is -2147483648 to 2147483647.Source data:{ "id" : 50, "name":"John" }{ "id" : 51, "name":"Jane" }{ "id" : 53, "name":"Jill" }{ "id" : 49612833315, "name":"Jill" }DDL statement:CREATE EXTERNAL TABLE jsontest_error_hive_bad_data_sample_2 ( id INT, name STRING)ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'WITH SERDEPROPERTIES ( 'ignore.malformed.json' = 'true')LOCATION 's3://awsexamplebucket/jsontest_error_hive_bad_data_2/';DML statement:SELECT * FROM jsontest_error_hive_bad_data_sample_2Error:Your query has the following error(s):HIVE_BAD_DATA: Error parsing field value '49612833315' for field 0: For input string: "49612833315"This query ran against the "default" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: 05b55fb3-481a-4012-8c0d-c27ef1ee746f.To resolve this issue, define the "id" column as BIGINT, which can read the value "49612833315." For more information, see Integer types.Modified DDL statement:CREATE EXTERNAL TABLE jsontest_error_hive_bad_data_sample_2_corrected ( id BIGINT, name STRING)ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'WITH SERDEPROPERTIES ( 'ignore.malformed.json' = 'true')LOCATION 's3://awsexamplebucket/jsontest_error_hive_bad_data_2/';Example 3Source format: JSONIssue: The input data is DECIMAL and the column is defined as DECIMAL in the table definition. However, the scale is defined as 2, which doesn't match the "0.000054" value. For more information, see DECIMAL or NUMERIC type.Source data:{ "id" : 0.50, "name":"John" }{ "id" : 0.51, "name":"Jane" }{ "id" : 0.53, "name":"Jill" }{ "id" : 0.000054, "name":"Jill" }DDL statement:CREATE EXTERNAL TABLE jsontest_error_hive_bad_data_sample_3( id DECIMAL(10,2), name STRING)ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'WITH SERDEPROPERTIES ( 'ignore.malformed.json' = 'true')LOCATION 's3://awsexamplebucket/jsontest_error_hive_bad_data_3/';DML statement:SELECT * FROM jsontest_error_hive_bad_data_sample_3Error:Your query has the following error(s):HIVE_BAD_DATA: Error parsing column '0': target scale must be larger than source scaleThis query ran against the "default" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: 1c3c7278-7012-48bb-8642-983852aff999.To resolve this issue, redefine the column with a scale that captures all input values. For example, instead of (10,2), use (10,7).CREATE EXTERNAL TABLE jsontest_error_hive_bad_data_sample_3_corrected( id DECIMAL(10,7), name STRING)ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'WITH SERDEPROPERTIES ( 'ignore.malformed.json' = 'true')LOCATION 's3://awsexamplebucket/jsontest_error_hive_bad_data_3/';Follow"
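Before changing the DDL, it can help to see exactly which values fail the cast. The following Python (boto3) sketch runs the TRY_CAST check from Example 1 through the Athena API; the database name ("default") and the s3://awsexamplebucket/athena-results/ output location are placeholder assumptions, so substitute your own.

# Minimal boto3 sketch: find the "id" values that cannot be cast to INTEGER.
import time
import boto3

athena = boto3.client("athena")

QUERY = """
SELECT id
FROM jsontest_error_hive_bad_data_correct_id_data_type
WHERE TRY_CAST(id AS INTEGER) IS NULL
"""

# Start the query and wait for it to finish.
execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://awsexamplebucket/athena-results/"},
)
query_id = execution["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    # The first row is the header; the remaining rows are values that didn't cast cleanly.
    for row in rows[1:]:
        print(row["Data"][0].get("VarCharValue"))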
https://repost.aws/knowledge-center/athena-hive-bad-data-parsing-field-value
How can I upgrade my Amazon Aurora MySQL-Compatible global database?
I want to perform either a minor or major version upgrade to my Amazon Aurora MySQL-Compatible Edition global database.
"I want to perform either a minor or major version upgrade to my Amazon Aurora MySQL-Compatible Edition global database.Short descriptionYou can perform either a minor or a major version upgrade to the Aurora clusters in your global database configuration. Upgrading an Aurora global database follows the same procedure as upgrading your individual Aurora MySQL-Compatible DB clusters. But, there are a few differences to be aware of for global clusters. This depends on the type of upgrade that you are performing and the type of Aurora DB cluster that you are using.Note: It's a best practice to test your applications on the upgraded version of Aurora. This applies to both minor and major version upgrades.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Perform a minor version upgradeAurora MySQL**-Compatible**Note: For all minor version upgrades, all secondary clusters must be upgraded before you upgrade the primary cluster.To perform minor version upgrades on your global cluster, follow the same process that you use for upgrading your individual Aurora clusters. Note that you can't apply automatic upgrades for clusters that are part of a global database.You can use these methods to update your Aurora MySQL-Compatible global cluster:Upgrade your Aurora MySQL cluster by modifying the engine version. To do this, you can either use the AWS CLI or the console. If using the AWS CLI, call modify-db-cluster, and specify the name and version of your DB cluster. Or, modify your cluster by using the Amazon RDS console.Apply any pending maintenance to your Aurora MySQL cluster (for versions older than 1.19.0).For more information, see Upgrading the minor version or patch level of an Aurora MySQL DB cluster.Aurora PostgreSQL-CompatibleThe Enable minor version upgrade option is automatically selected when you create a new Aurora PostgreSQL-Compatible Edition cluster. Unless you turn this off, any minor upgrades are made to your cluster automatically**.** Because the zero-downtime patching feature isn't available for global clusters, you might experience brief outages during the upgrade process.For more information, see How to perform minor version upgrades and apply patches.Perform a major version upgradeAurora MySQL-CompatibleDuring a major version upgrade, the global cluster containing all individual clusters is upgraded at once.Follow the steps in How the Aurora MySQL in-place major version upgrade works.Be sure to choose the global cluster and not one of the individual clusters. Selecting this choice means that all your clusters are upgraded at the same time, and not one by one. If you're using the Amazon RDS console to upgrade, then choose the item with a Global database role. If you're using the AWS CLI, then call the modify-global-cluster command instead of the modify-db-cluster command.After an upgrade, you can't backtrack to a time before the upgrade.To troubleshoot issues with your upgrade, see Troubleshooting for Aurora MySQL in-place upgrades.For more information on how to perform a major upgrade for Aurora MySQL-Compatible, see In-place major upgrades for global databases.Aurora PostgreSQL-CompatibleWhen you do a major upgrade on an Aurora PostgreSQL-Compatible cluster, it's a best practice to test your applications on the upgraded version. 
For more information, see Before upgrading your production DB cluster to a new major version.Check that your cluster has a recovery point objective (RPO) set for the rds.global_db_rpo parameter before you upgrade. This parameter is turned off by default, but you can't perform a major upgrade without turning it on.After making sure that you've satisfied all the prerequisites, upgrade your Aurora PostgreSQL global cluster.For more information on how to perform major upgrades for Aurora PostgreSQL-Compatible, see Upgrading the PostgreSQL engine to a new major version.Related informationUsing Amazon Aurora global databasesFollow"
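As a rough illustration of the two paths described above, the following Python (boto3) sketch performs a minor upgrade cluster by cluster (secondaries before the primary) and a major upgrade against the global cluster itself. The cluster identifiers and engine versions are placeholders; confirm valid target versions with describe-db-engine-versions before upgrading anything.

# Minimal boto3 sketch of the minor and major upgrade paths for an Aurora MySQL global database.
import boto3

rds = boto3.client("rds")

# Minor version upgrade: modify each member cluster individually,
# upgrading all secondary clusters before the primary cluster.
for cluster_id in ["my-secondary-cluster-1", "my-secondary-cluster-2", "my-primary-cluster"]:
    rds.modify_db_cluster(
        DBClusterIdentifier=cluster_id,
        EngineVersion="5.7.mysql_aurora.2.11.3",  # placeholder target minor version
        ApplyImmediately=True,
    )

# Major version upgrade: target the global cluster itself so that every
# member cluster is upgraded together, not one by one.
rds.modify_global_cluster(
    GlobalClusterIdentifier="my-global-cluster",
    EngineVersion="8.0.mysql_aurora.3.04.0",  # placeholder target major version
    AllowMajorVersionUpgrade=True,
)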
https://repost.aws/knowledge-center/aurora-mysql-global-cluster-upgrade
How do I check running queries for my Amazon RDS MySQL DB instance?
I need to see which queries are actively running on an Amazon Relational Database Service (Amazon RDS) DB instance that is running MySQL. How can I do this?
"I need to see which queries are actively running on an Amazon Relational Database Service (Amazon RDS) DB instance that is running MySQL. How can I do this?ResolutionTo see which queries are actively running for MySQL DB instance on Amazon RDS, follow these steps:1.    Connect to the DB instance running the MySQL.2.    Run the following command:SHOW FULL PROCESSLIST\GNote: If you don't use the FULL keyword, only the first 100 characters of each statement are shown in the Info field.3.    Or, run the following query to retrieve the same result set:SELECT * FROM INFORMATION_SCHEMA.PROCESSLISTNote: Your user account must be granted the administration privilege for the MySQL PROCESS server to see all the threads running on an instance of MySQL. Otherwise, SHOW PROCESSLIST shows only the threads associated that are with the MySQL account that you're using. Also note that SHOW FULL PROCESSLIST and INFORMATION_SCHEMA.PROCESSLIST statements can negatively affect performance because they require a mutex.Related informationMySQL documentation for The MySQL Command-Line ClientMySQL documentation for --tee=file_nameMySQL documentation for MySQL WorkbenchMySQL documentation for The INFORMATION_SCHEMA PROCESSLIST TableFollow"
https://repost.aws/knowledge-center/rds-mysql-running-queries
How do I resolve the error "CloudFront wasn't able to connect to the origin"?
"I'm using Amazon CloudFront to serve content, but my users are receiving the HTTP 502 error "CloudFront wasn't able to connect to the origin." What's causing this error?"
"I'm using Amazon CloudFront to serve content, but my users are receiving the HTTP 502 error "CloudFront wasn't able to connect to the origin." What's causing this error?ResolutionHTTP 502 errors from CloudFront can occur because of the following reasons:There's an SSL negotiation failure because the origin is using SSL/TLS protocols and ciphers that aren't supported by CloudFront.There's an SSL negotiation failure because the SSL certificate on the origin is expired or invalid, or because the certificate chain is invalid.There's a host header mismatch in the SSL negotiation between your CloudFront distribution and the custom origin.The custom origin isn't responding on the ports specified in the origin settings of the CloudFront distribution.The custom origin is ending the connection to CloudFront too quickly.For detailed instructions on how to troubleshoot these issues, see HTTP 502 status code (Bad Gateway).Related informationTroubleshooting error responses from your originHow do I troubleshoot a 502: "The request could not be satisfied" error from CloudFront?Follow"
https://repost.aws/knowledge-center/resolve-cloudfront-connection-error
How do I avoid RequestLimitExceeded errors when I use PowerShell to programmatically launch multiple Amazon EC2 instances?
"When I use PowerShell to launch multiple Amazon Elastic Compute Cloud (Amazon EC2) instances, I sometimes receive RequestLimitExceeded errors."
"When I use PowerShell to launch multiple Amazon Elastic Compute Cloud (Amazon EC2) instances, I sometimes receive RequestLimitExceeded errors.ResolutionA RequestLimitExceeded error for Amazon EC2 APIs usually indicates request rate limiting or resource rate limiting API throttling. You can use a combination of retry logic and exponential backoff strategies to work around this issue.Launching an Amazon EC2 instance is a mutating call, and is subject to both request rate and resource rate limiting. The script that you use to launch the instances must accommodate the refill rate of the token bucket.Use one of the following delayed invocation or retry strategies to avoid RequestLimitExceeded errors.Note: AWS SDK for .NET has a built-in retry mechanism that is turned on by default. To customize the timeouts, see Retries and timeouts.The following example includes a delayed invocation mechanism for your requests. The delayed invocation allows the request bucket to fill up:# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.# SPDX-License-Identifier: MIT-0# Example Code to launch 50 EC2 instances of type 'm5a.large'.try { $params = @{ ImageId = '<AMI_ID>' InstanceType = 'm5a.large' AssociatePublicIp = $false SubnetId = '<Subnet_ID>' MinCount = 10 MaxCount = 10 } for ($i=0;$i<=5;$i++){ $instance = New-EC2Instance @params Start-Sleep 5000 #Sleep for 5 seconds to allow Request bucket to refill at the rate of 2 requests per second }} catch { Write-Error "An Exception Occurred!"}The following example includes retry logic in the script:# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.# SPDX-License-Identifier: MIT-0#Example Code to launch 50 EC2 instances of type 'm5a.large'.$Stoploop = $false[int] $Retrycount = "0"do { try { $params = @{ ImageId = '<AMI_ID>' InstanceType = 'm5a.large' AssociatePublicIp = $false SubnetId = '<Subnet_ID>' MinCount = 50 MaxCount = 50 } $instance = New-EC2Instance @params $Stoploop = $true } catch { if ($Retrycount -gt 3) { Write - Host "Could not complete request after 3 retries." $Stoploop = $true } else { Write-Host "Could not complete request retrying in 5 seconds." Start-Sleep -Seconds 25 #25 seconds of sleep allows for 50 request tokens to be refilled at the rate of 2/sec $Retrycount = $Retrycount + 1 } } } While($Stoploop -eq $false)Related informationRequest throttling for the Amazon EC2 APIRetry behaviorFollow"
https://repost.aws/knowledge-center/ec2-launch-multiple-requestlimitexceeded
How do I troubleshoot issues creating an AWS Fargate profile?
"I have questions about creating an AWS Fargate profile. Or, I'm having issues creating an AWS Fargate profile. How can I troubleshoot this?"
"I have questions about creating an AWS Fargate profile. Or, I'm having issues creating an AWS Fargate profile. How can I troubleshoot this?Short descriptionA Fargate profile is a mechanism to specify which pods should be scheduled on Fargate nodes in an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.A Fargate profile has selectors that are matched with every incoming pod specification YAML file. If the match is successful and AWS Fargate considerations are met, then the pod is scheduled on Fargate nodes using subnets and the AWS Identity and Access Management (IAM) role specified in the Fargate profile.Some of the pod placement rules are as follows:If you configured both namespace and match labels for your pod selectors:The Fargate workflow considers your pod to be matched with a Fargate profile only if both the conditions (namespace and labels) match the pod specification.If you have specified multiple pod selectors within a single Fargate profile:When a pod matches either of these selectors, it's scheduled by the fargate-scheduler on a Fargate node using data from the matched Fargate profile.If a pod specification matches with multiple Fargate profiles:The pod is scheduled according to a random Fargate profile, unless the following annotation is specified within the pod specification: eks.amazonaws.com/fargate-profile:<fp_name>.Note the following limits when creating a Fargate profile:You can create up to ten Fargate profiles per cluster.You can have up to five selectors per Fargate profile.You can have up to five label pairs per selector.ResolutionThe following are common scenarios and issues encountered when creating a Fargate profile:How can I create a Fargate profile to schedule pods on Fargate nodes?You can use the Amazon EKS console, AWS Command Line Interface (AWS CLI), SDK, or API (Cloudformation/eksctl, and so on) to create a Fargate profile.How can I create a Fargate profile using AWS CloudFormation?You can use the AWS::EKS::FargateProfile CloudFormation resource type to create a Fargate profile.However, if you're not creating Amazon Elastic Compute Cloud (Amazon EC2) node groups along with Fargate nodes, then coredns add-ons have the following annotation by default:eks.amazonaws.com/compute-type : ec2To change the coredns annotations, you must patch your deployment externally. You can do this from the terminal where you manage your EKS cluster. Or, you can use a CloudFormation custom resource to automate this process based on your use case.Note: It's a best practice to use eksctl to create/update EKS clusters because it simplifies cluster resource administration.What is the pod execution role that must be included with the Fargate profile?The pod execution role is an IAM role that's used by the Fargate node to make AWS API calls. These include calls made to fetch Amazon Elastic Container Registry (Amazon ECR) images such as VPC CNI, CoreDNS, and so on. The AmazonEKSFargatePodExecutionRolePolicy managed policy must be attached to this role.Kubelet on the Fargate node uses this IAM role to communicate with the API server. This role must be included in the aws-auth configmap so that kubelet can authenticate with the API server. 
When you create a Fargate profile, the Fargate workflow automatically adds this role to the cluster's aws-auth configmap.If your Fargate nodes are showing as 'Not Ready', then make sure that the pod execution role is included in aws-auth.The following is a sample aws-auth mapRoles snippet after creating a Fargate profile with a pod execution role:mapRoles: | - groups: - system:bootstrappers - system:nodes - system:node-proxier rolearn: <Pod_execution_role_ARN> username: system:node:{{SessionName}}If the aws-auth configmap is altered after creating the Fargate profile, then you might receive the following warning while scheduling pods on Fargate nodes:Pod provisioning timed out (will retry) for pod: <pod_nginx>I'm planning to migrate workloads to EKS Fargate. How do I create subnets and security groups for use?EKS Fargate currently supports only private subnets. This means that there isn't a default route to the internet gateway within the route tables attached to the subnets specified within your Fargate profile. So, you can have either a NAT gateway or VPC endpoints configured for the subnets that you intend to use for the Fargate profile.The cluster security group is by default attached to Fargate nodes. You don't need to provision a security group specifically for this purpose. Also, if using VPC endpoints for your subnets, make sure that the cluster has private endpoint access activated.Make sure that the security group attached to the VPC endpoints has an inbound rule allowing HTTPS port 443 traffic from the cluster's VPC CIDR.I'm creating Fargate profiles using an API-based provisioner such as Terraform or AWS CloudFormation. Why are my Fargate profiles going into the CREATE_FAILED state?Only one Fargate profile can be created or deleted at a time. So, if you're deleting a Fargate profile, then no other Fargate profiles can be created or deleted at the same time.While using an API-based provisioner, such as CloudFormation, make sure that the creation of a Fargate profile starts after all other Fargate profiles are created successfully. You can create a chain-like hierarchy among Fargate profiles using the DependsOn attribute so that creation and deletion are sequential. If the requests aren't sequential, then you might receive an error similar to the following:Cannot create Fargate Profile <fp_name1> because cluster <cluster_name> currently has Fargate profile <fp_name2> in status CREATINGCan I specify the resources (CPU, memory) to be provisioned for Fargate nodes within the Fargate profile?You can't directly specify the amount of resources to be provisioned within the Fargate profile. It's a best practice to specify resource requests within your Fargate pod specification YAML file. Doing so helps the Fargate workflow assign at least that amount of resources for the pod.The amount of vCPU or memory you see after running the kubectl describe node <node_name> command might not be the same as the amount of CPU or memory that you requested for the pod. The amount of memory and CPU the node has depends on the available capacity in the Fargate resource allocation pool. You're billed according to the amount that you requested within your pod specification. You aren't billed for the amount of resources visible with kubectl.CPU and memory are always provisioned and billed by AWS Fargate as discrete combinations. The billable amount includes utilization by components other than the pod running on the node, such as kubelet, kube-proxy, and so on. 
For example, if you request one vCPU and eight GB memory for your pod, then you're billed for the next higher combination of two vCPU and nine GB memory. This accounts for the resources used by kubelet and other Kubernetes components on the node. For more information, see Pod CPU and memory.Follow"
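Because only one Fargate profile can be created or deleted at a time, scripted creation should be strictly sequential. The following Python (boto3) sketch creates profiles one at a time and waits for each to become ACTIVE before starting the next, which avoids the "currently has Fargate profile ... in status CREATING" error; the cluster name, pod execution role ARN, subnet IDs, and selectors are placeholders.

# Minimal boto3 sketch: create Fargate profiles sequentially.
import boto3

eks = boto3.client("eks")
waiter = eks.get_waiter("fargate_profile_active")

profiles = [
    {"name": "fp-default", "selectors": [{"namespace": "default"}]},
    {"name": "fp-backend", "selectors": [{"namespace": "backend", "labels": {"run-on": "fargate"}}]},
]

for profile in profiles:
    eks.create_fargate_profile(
        fargateProfileName=profile["name"],
        clusterName="my-cluster",
        podExecutionRoleArn="arn:aws:iam::111122223333:role/AmazonEKSFargatePodExecutionRole",
        subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],  # private subnets only
        selectors=profile["selectors"],
    )
    # Block until the profile is ACTIVE so the next create call doesn't fail.
    waiter.wait(clusterName="my-cluster", fargateProfileName=profile["name"])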
https://repost.aws/knowledge-center/fargate-troubleshoot-profile-creation
What is Amazon S3 Glacier provisioned capacity?
I want to use Amazon Simple Storage Service Glacier provisioned capacity.
"I want to use Amazon Simple Storage Service Glacier provisioned capacity.ResolutionWith Amazon S3 Glacier provisioned capacity units, you can pay a fixed upfront fee for a given month to ensure the availability of retrieval capacity for expedited retrievals from Amazon S3 Glacier vaults. You can purchase multiple provisioned capacity units per month to increase the amount of data you can retrieve. For information on the maximum number of provisioned capacity units that you can purchase per account, see Service quotas.Important: Most use cases don't require the purchase of provisioned capacity units. Purchase provisioned capacity units only if the following are true:You have a lot of data stored in Amazon S3 Glacier.You plan to frequently retrieve large amounts of data from Amazon S3 Glacier in a given month.Your use case demands that the data you retrieve is accessible within minutes.Related informationAccessing Amazon S3 GlacierAmazon S3 Glacier pricingFollow"
https://repost.aws/knowledge-center/glacier-provisioned-capacity
Why is the restore of my Amazon DynamoDB table taking a long time?
"I'm trying to restore my Amazon DynamoDB table, but the restore process is taking a long time."
"I'm trying to restore my Amazon DynamoDB table, but the restore process is taking a long time.ResolutionWhen you restore your DynamoDB table from its backup, the restore process is completed in less than an hour in most cases. However, restore times are directly related to the configuration of your table, such as the size of your table, number of underlying partitions, and other related variables. When you plan disaster discovery, it's a best practice to regularly document average restore completion times and establish how these times affect your overall Recovery Time Objective.The time DynamoDB takes to restore a table varies based on multiple factors and is not necessarily correlated directly to the size of the table. If your table contains data with significant skew and includes secondary indexes, then the time to restore might increase. When the restore process is in progress, the table status is Restoring. When restore is completed, the table shows the status as Active. All backups in DynamoDB work without consuming any provisioned throughput on the table.Keep the following in mind when you are restoring a DynamoDB table from its backup:The restore times aren't always correlated directly to the size of the table.When you perform a point-in-time recovery of a DynamoDB table, the restore process takes at least 20 minutes irrespective of the size of the table. This is because after you restore, DynamoDB needs time to provision all resources to create the new table and initiate the restore process to copy the actual data.If data in the table is evenly distributed, then the restoration time is proportional to the largest single partition by item count.If the data is skewed, the restoration time might increase due to potential hot key throttling. For example, if your table’s primary key is using the month of the year for partitioning, and all your data is from the month of December, then you have skewed data.Restoring a table can be faster and more cost efficient if you exclude secondary indexes from being created.Restoration times for two different tables of different schemas and data can't be compared. This is because the restoration time for a table depends upon the data skewness on partition level.Related informationUsing DynamoDB backup and restoreFollow"
https://repost.aws/knowledge-center/dynamodb-restore-taking-long