Columns: Question (string, 0-222 chars), Description (string, 0-790 chars), Answer (string, 0-28.2k chars), Link (string, 35-92 chars)
How do I troubleshoot SMS delivery delays in Amazon SNS?
I get delivery delays of mobile text messaging (SMS) to destination numbers in Amazon Simple Notification Service (Amazon SNS).
"I get delivery delays of mobile text messaging (SMS) to destination numbers in Amazon Simple Notification Service (Amazon SNS).Short descriptionSMS delivery could be delayed for the following reasons:The phone number is temporarily out of the coverage area.The phone number is in a roaming network.There's increased network traffic for a particular carrier.The phone was turned off when a carrier tried delivering the message.ResolutionTroubleshoot single device issuesRestart the device so that it's connected to the nearest network base station.Change the SIM slot to check for a device issue.Check if the device can receive SMS messages from other sources.Troubleshoot multiple device issuesIf delayed SMS delivery is affecting multiple devices, there could be issues with downstream providers and carriers.To troubleshoot potential downstream issues, create a support case for Amazon SNS. Provide the following information in your support case:The AWS Region you're using to send SMS messagesA timestamp of when the issue startedThree samples of SMS logs with the message IDs of failed SMS messages to different numbers not older than three daysNote: SMS deliveries from Amazon CloudWatch logs don't always provide accurate SMS delivery times. In some cases, SMS messages can be delivered before CloudWatch logs are received. The dwellTimeMsUntilDeviceAck value in the delivery logs shows when the carrier received the Delivery Report (DLR), but doesn't provide information on delayed SMS messages.Follow"
https://repost.aws/knowledge-center/sns-sms-delivery-delays
How do I stream data from CloudWatch Logs to a VPC-based Amazon OpenSearch Service cluster in a different account?
"I'm trying to stream data from Amazon CloudWatch Logs to an Amazon OpenSearch Service cluster using a virtual private cloud (VPC) in another account. However, I receive an "Enter a valid Amazon OpenSearch Service Endpoint" error message."
"I'm trying to stream data from Amazon CloudWatch Logs to an Amazon OpenSearch Service cluster using a virtual private cloud (VPC) in another account. However, I receive an "Enter a valid Amazon OpenSearch Service Endpoint" error message.Short descriptionTo stream data from CloudWatch Logs to an OpenSearch Service cluster in another account, perform the following steps:1.    Set up CloudWatch Logs in Account A.2.    Configure AWS Lambda in Account A.3.    Configure Amazon Virtual Private Cloud (Amazon VPC) peering between accounts.ResolutionSet up CloudWatch Logs in Account A1.    Open the CloudWatch Logs console in Account A and select your log group.2.    Choose Actions.3.    Choose the Create OpenSearch subscription filter.4.    For the Select Account option, select This account.5.    For the OpenSearch Service cluster dropdown list, choose an existing cluster for Account A.6.    Choose the Lambda IAM Execution Role that has permissions to make calls to the selected OpenSearch Service cluster.7.    Attach the AWSLambdaVPCAccessExecutionRole policy to your role.8.    In Configure log format and filters, select your Log Format and Subscription Filter Pattern.9.    Choose Next.10.    Enter the Subscription filter name and choose Start Streaming. For more information about streaming, see Streaming CloudWatch Logs data to Amazon OpenSearch Service.Configure Lambda in Account A1.    In Account A, open the Lambda console.2.    Select the Lambda function you created to stream the log.3.    In the function code, update the endpoint variable of the OpenSearch Service cluster in Account B. This update allows the Lambda function to send data to the OpenSearch Service domain in Account B.4.    Choose Configuration.5.    Choose VPC.6.    Under VPC, choose Edit.7.    Select your VPC, subnets, and security groups.Note: This selection makes sure that the Lambda function runs inside a VPC, using VPC routing to send data back to the OpenSearch Service domain. For more information about Amazon Virtual Private Cloud (Amazon VPC) configurations, see Configuring a Lambda function to access resources in a VPC.8.    Choose Save.Configure VPC peering between accounts1.    Open the Amazon VPC console in Account A and Account B.Note: Be sure that your VPC doesn't have overlapping CIDR blocks.2.    Create a VPC peering session between the two custom VPCs (Lambda and OpenSearch Service). This VPC peering session allows Lambda to send data to your OpenSearch Service domain. For more information about VPC peering connections, see Create a VPC peering connection.3.    Update the route table for both VPCs. For more information about route tables, see Update your route tables for a VPC peering connection.4.    In Account A, go to Security Groups.5.    Select the security group assigned to the subnet where Lambda is set up.Note: In this instance, "security group" refers to a subnet network ACL.6.    Add the inbound rule to allow traffic from the OpenSearch Service subnets.7.    In Account B, select the security group assigned to the subnet where OpenSearch Service is set up.8.    Add the inbound rule to allow traffic from the Lambda subnets.9.    In Account B, open the OpenSearch Service console.10.    Choose Actions.11.    
Choose Modify access policy, and then append the following policy:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<AWS Account A>:role/<Lambda Execution Role>" }, "Action": "es:*", "Resource": "arn:aws:es:us-east-1:<AWS Account B>:domain/<OpenSearch Domain Name>/*" } ]}This policy allows the Lambda function's execution role in Account A to make calls to the OpenSearch Service domain.12.    Check the Error count and success rate metric in the Lambda console. This metric verifies whether logs are successfully delivered to OpenSearch Service.13.    Check the Indexing rate metric in OpenSearch Service to confirm whether the data was sent. CloudWatch Logs now streams across both accounts in your Amazon VPC."
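The following AWS CLI sketch mirrors the peering and routing steps above; the VPC IDs, peering connection ID, route table ID, CIDR block, and account ID are placeholders, not values from this article:
# From Account A: request a peering connection to the OpenSearch Service VPC in Account B
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-0aaaaaaaaaaaaaaaa \
  --peer-vpc-id vpc-0bbbbbbbbbbbbbbbb \
  --peer-owner-id 111122223333
# From Account B: accept the peering request
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0123456789abcdef0
# In each account: route the other VPC's CIDR block through the peering connection
aws ec2 create-route \
  --route-table-id rtb-0aaaaaaaaaaaaaaaa \
  --destination-cidr-block 10.1.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0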
https://repost.aws/knowledge-center/opensearch-stream-data-cloudwatch
How does throttling on my global secondary index affect my Amazon DynamoDB table?
My global secondary index (GSI) is being throttled. How does this affect the base Amazon DynamoDB table?
"My global secondary index (GSI) is being throttled. How does this affect the base Amazon DynamoDB table?Short descriptionThrottling on a GSI affects the base table in different ways, depending on whether the throttling is for read or for write activity:When a GSI has insufficient read capacity, the base table isn't affected.When a GSI has insufficient write capacity, write operations don't succeed on the base table or any of its GSIs.For more information, see Using global secondary indexes in DynamoDB.ResolutionTo prevent throttling, do the following:Be sure that the provisioned write capacity for each GSI is equal to or greater than the provisioned write capacity of the base table. To modify the provisioned throughput of a GSI, use the UpdateTable operation. If automatic scaling is turned on for the base table, then it's a best practice to apply the same settings to the GSI. You can do this by choosing Copy from base table in the DynamoDB console. For best performance, be sure to turn on Use the same read/write capacity settings for all global secondary indexes. This option allows DynamoDB auto scaling to uniformly scale all the global secondary indexes on the base table. For more information, see Enabling DynamoDB auto scaling on existing tables.Be sure that the GSI's partition key distributes read and write operations as evenly as possible across partitions. This helps prevent hot partitions, which can lead to throttling. For more information, see Designing partition keys to distribute your workload evenly.Use Amazon CloudWatch Contributor Insights for DynamoDB to identify the most frequently throttled keys.Follow"
https://repost.aws/knowledge-center/dynamodb-gsi-throttling-table
How do I troubleshoot Amazon Kinesis Agent issues on a Linux machine?
"I'm trying to use Amazon Kinesis Agent on a Linux machine. However, I'm encountering an issue. How do I resolve this?"
"I'm trying to use Amazon Kinesis Agent on a Linux machine. However, I'm encountering an issue. How do I resolve this?Short descriptionThis article covers the following issues:Kinesis Agent is sending duplicate events.Kinesis Agent is causing write throttles and failed records on my Amazon Kinesis stream.Kinesis Agent is unable to read or stream log files.My Amazon Elastic Computing (Amazon EC2) server keeps failing because of insufficient Java heap size.My Amazon EC2 CPU utilization is very high.ResolutionKinesis Agent is sending duplicate eventsIf you receive duplicates whenever you send logs from Kinesis Agent, there's likely a file rotation in place where the match pattern isn't correctly qualified. Whenever you send a log, Kinesis Agent checks the latestUpdateTimestamp of each file that matches the file pattern. By default, Kinesis Agent chooses the most recently updated file, identifying an active file that matches the rotation pattern. If more than one file is updated at the same time, Kinesis Agent can't determine the active file to track. Therefore, Kinesis Agent begins to tail the updated files from the beginning, causing several duplicates.To avoid this issue, create different file flows for each individual file, making sure that your file pattern tracks the rotations instead.Note: If you're tracking a rotation, it's a best practice to use either the create or rename log rotate settings, instead of copytruncate.For example, you can use a file flow that's similar to this one:"flows": [ { "filePattern": "/tmp/app1.log*", "kinesisStream": "yourkinesisstream1" }, { "filePattern": "/tmp/app2.log*", "kinesisStream": "yourkinesisstream2" } ]Kinesis Agent also retries any records that it fails to send back when there are intermittent network issues. If Kinesis Agent fails to receive server-side acknowledgement, it tries again, creating duplicates. In this example, the downstream application must de-duplicate.Duplicates can also occur when the checkpoint file is tempered or removed. If a checkpoint file is stored in /var/run/aws-kinesis-agent, then the file might get cleaned up during a reinstallation or instance reboot. When you run Kinesis Agent again, the application fails as soon as the file is read, causing duplicates. Therefore, keep the checkpoint in the main Agent directory and update the Kinesis Agent configuration with a new location.For example:"checkpointFile": "/aws-kinesis-agent-checkpoints/checkpoints"Kinesis Agent is causing write throttles and failed records on my Amazon Kinesis data streamBy default, Kinesis Agent tries to send the log files as quickly as possible, breaching Kinesis' throughput thresholds. However, failed records are re-queued, and are continuously retried to prevent any data loss. When the queue is full, Kinesis Agent stops tailing the file, which can cause the application to lag.For example, if the queue is full, your log looks similar to this:com.amazon.kinesis.streaming.agent.Agent [WARN] Agent: Tailing is 745.005859 MB (781195567 bytes) behind.Note: The queue size is determined by the publishQueueCapacity parameter (with the default value set to "100").To investigate any failed records or performance issues on your Kinesis data stream, try the following:Monitor the RecordSendErrors metric in Amazon CloudWatch.Review your Kinesis Agent logs to check if any lags occurred. The ProvisionedThroughputExceededException entry is visible only under the DEBUG log level. 
During this time, Kinesis Agent's record sending speed can be slower if most of the CPU is used to parse and transform data.If you see that Kinesis Agent is falling behind, then consider scaling up your Amazon Kinesis delivery stream.Kinesis Agent is unable to read or stream log filesMake sure that the Amazon EC2 instance that your Kinesis Agent is running on has proper permissions to access your destination Kinesis delivery stream. If Kinesis Agent fails to read the log file, then check whether Kinesis Agent has read permissions for that file. For all files matching this pattern, read permission must be granted to aws-kinesis-agent-user. For the directory containing the files, read and execute permissions must also be granted to aws-kinesis-agent-user. Otherwise, you get an Access Denied error or Java Runtime Exception.My Amazon EC2 server keeps failing because of insufficient Java heap sizeIf your Amazon EC2 server keeps failing because of insufficient Java heap size, then increase the heap size allotted to Amazon Kinesis Agent. To configure the amount of memory available to Kinesis Agent, update the “start-aws-kinesis-agent” file. Increase the set values for the following parameters:JAVA_START_HEAPJAVA_MAX_HEAPNote: On Linux, the file path for “start-aws-kinesis-agent” is “/usr/bin/start-aws-kinesis-agent”.My Amazon EC2 CPU utilization is very highCPU utilization can spike if Kinesis Agent is performing sub-optimized regex pattern matching and log transformation. If you already configured Kinesis Agent, try removing all the regular expression (regex) pattern matches and transformations. Then, check whether you're still experiencing CPU issues.If you still experience CPU issues, then consider tuning the threads and records that are buffered in memory. Or, update some of the default parameters in the /etc/aws-kinesis/agent.json configuration settings file. You can also lower several parameters in the Kinesis Agent configuration file.Here are the general configuration parameters that you can try lowering:sendingThreadsMaxQueueSize: The workQueue size of the threadPool for sending data to destination. The default value is 100.maxSendingThreads: The number of threads for sending data to destination. The minimum value is 2. The default value is 12 times the number of cores for your machine.maxSendingThreadsPerCore: The number of threads per core for sending data to destination. The default value is 12.Here are the flow configuration parameters that you can try lowering:publishQueueCapacity: The maximum number of buffers of records that can be queued up before they are sent to the destination. The default value is 100.minTimeBetweenFilePollsMillis: The time interval when the tracked file is polled and the new data begins to parse. The default value is 100.Follow"
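Pulling the parameters above together, here is a sketch of what a tuned /etc/aws-kinesis/agent.json might look like. The stream name, file pattern, and lowered values are illustrative assumptions, not recommendations:
# Write an example agent configuration (adjust values for your workload)
sudo tee /etc/aws-kinesis/agent.json <<'EOF'
{
  "checkpointFile": "/aws-kinesis-agent-checkpoints/checkpoints",
  "maxSendingThreads": 4,
  "sendingThreadsMaxQueueSize": 50,
  "flows": [
    {
      "filePattern": "/tmp/app1.log*",
      "kinesisStream": "yourkinesisstream1",
      "publishQueueCapacity": 50
    }
  ]
}
EOF
# Restart the agent so the new settings take effect (command may vary by distribution)
sudo service aws-kinesis-agent restart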
https://repost.aws/knowledge-center/troubleshoot-kinesis-agent-linux
How do I revert to a known stable kernel after an update prevents my Amazon EC2 instance from rebooting successfully?
How do I revert to a stable kernel after an update prevents my Amazon Elastic Compute Cloud (Amazon EC2) instance from rebooting successfully?
"How do I revert to a stable kernel after an update prevents my Amazon Elastic Compute Cloud (Amazon EC2) instance from rebooting successfully?Short descriptionIf you performed a kernel update to your EC2 Linux instance but the kernel is now corrupt, then the instance can't reboot. You can't use SSH to connect to the impaired instance.To revert to the previous versions, do the following:1.    Access the instance's root volume.2.    Update the default kernel in the GRUB bootloader.ResolutionAccess the instance's root volumeThere are two methods to access the root volume:Method 1: Use the EC2 Serial ConsoleIf you enabled EC2 Serial Console for Linux, then you can use it to troubleshoot supported Nitro-based instance types. The serial console helps you troubleshoot boot issues, network configuration, and SSH configuration issues. The serial console connects to your instance without the need for a working network connection. You can access the serial console using the Amazon EC2 console or the AWS Command Line Interface (AWS CLI).Before using the serial console, grant access to it at the account level. Then create AWS Identity and Access Management (IAM) policies granting access to your IAM users. Also, every instance using the serial console must include at least one password-based user. If your instance is unreachable and you haven’t configured access to the serial console, then follow the instructions in Method 2. For information on configuring the EC2 Serial Console for Linux, see Configure access to the EC2 Serial Console.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Method 2: Use a rescue instanceCreate a temporary rescue instance, and then remount your Amazon Elastic Block Store (Amazon EBS) volume on the rescue instance. From the rescue instance, you can configure your GRUB to take the previous kernel for booting.Important: Don't perform this procedure on an instance store-backed instance. Because the recovery procedure requires a stop and start of your instance, any data on that instance is lost. For more information, see Determine the root device type of your instance.1.    Create an EBS snapshot of the root volume. For more information, see Create Amazon EBS snapshots.2.    Open the Amazon EC2 console.Note: Be sure that you're in the correct Region.3.    Select Instances from the navigation pane, and then choose the impaired instance.4.    Choose Instance State, Stop instance, and then select Stop.5.    In the Storage tab, under Block devices, select the Volume ID for /dev/sda1 or /dev/xvda.Note: The root device differs by AMI, but /dev/xvda or /dev/sda1 are reserved for the root device. For example, Amazon Linux 1 and 2 use /dev/xvda. Other distributions, such as Ubuntu 14, 16, 18, CentOS 7, and RHEL 7.5, use /dev/sda1.6.    Choose Actions, Detach Volume, and then select Yes, Detach. Note the Availability Zone.Note: You can tag the EBS volume before detaching it to help identify it in later steps.7.    Launch a rescue EC2 instance in the same Availability Zone.Note: Depending on the product code, you might be required to launch an EC2 instance of the same OS type. For example, if the impaired EC2 instance is a paid RHEL AMI, you must launch an AMI with the same product code. For more information, see Get the product code for your instance.If the original instance is running SELinux (RHEL, CentOS 7 or 8, for example), launch the rescue instance from an AMI that uses SELinux. 
If you select an AMI running a different OS, such as Amazon Linux 2, any modified file on the original instance has broken SELinux labels.8.    After the rescue instance launches, choose Volumes from the navigation pane, and then choose the detached root volume of the impaired instance.9.    Choose Actions, Attach Volume.10.    Choose the rescue instance ID ( id-xxxxx), and then set an unused device. In this example, /dev/sdf.11.     Use SSH to connect to the rescue instance.12.    Run the lsblk command to view your available disk devices:lsblkThe following is an example of the output:NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTxvda 202:0 0 15G 0 disk└─xvda1 202:1 0 15G 0 part /xvdf 202:0 0 15G 0 disk └─xvdf1 202:1 0 15G 0 partNote: Nitro-based instances expose EBS volumes as NVMe block devices. The output generated by the lsblk command on Nitro-based instances shows the disk names as nvme[0-26]n1. For more information, see Amazon EBS and NVMe on Linux instances. The following is an example of the lsblk command output on a Nitro-based instance:NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTnvme0n1 259:0 0 8G 0 disk └─nvme0n1p1 259:1 0 8G 0 part /└─nvme0n1p128 259:2 0 1M 0 part nvme1n1 259:3 0 100G 0 disk └─nvme1n1p1 259:4 0 100G 0 part /13.    Run the following command to become root:sudo -i14.    Mount the root partition of the mounted volume to /mnt. In the preceding example, /dev/xvdf1 or /dev/nvme1n1p1is the root partition of the mounted volume. For more information, see Make an Amazon EBS volume available for use on Linux. Note, in the following example, replace /dev/xvdf1 with the correct root partition for your volume.mount -o nouuid /dev/xvdf1 /mntNote: If /mnt doesn't exist on your configuration, create a mount directory, and then mount the root partition of the mounted volume to this new directory. mkdir /mntmount -o nouuid /dev/xvdf1 /mntYou can now access the data of the impaired instance through the mount directory.15.    Mount /dev, /run, /proc, and /sys of the rescue instance to the same paths as the newly mounted volume:for m in dev proc run sys; do mount -o bind {,/mnt}/$m; doneCall the chroot function to change into the mount directory.Note: If you have a separate /boot partition, mount it to /mnt/boot before running the following command.chroot /mntUpdate the default kernel in the GRUB bootloaderThe current corrupt kernel is in position 0 (zero) in the list. The last stable kernel is in position 1. To replace the corrupt kernel with the stable kernel, use one of the following procedures, based on your distro:GRUB1 (Legacy GRUB) for Red Hat 6 and Amazon LinuxGRUB2 for Ubuntu 14 LTS, 16.04 and 18.04GRUB2 for RHEL 7 and Amazon Linux 2GRUB2 for RHEL 8 and CentOS 8GRUB1 (Legacy GRUB) for Red Hat 6 and Amazon Linux 1Use the sed command to replace the corrupt kernel with the stable kernel in the /boot/grub/grub.conf file:sed -i '/^default/ s/0/1/' /boot/grub/grub.confGRUB2 for Ubuntu 14 LTS, 16.04, and 18.041.    Replace the corrupt GRUB_DEFAULT=0 default menu entry with the stable GRUB_DEFAULT=saved value in the /etc/default/grub file:sed -i 's/GRUB_DEFAULT=0/GRUB_DEFAULT=saved/g' /etc/default/grub2.    Run the update-grub command so that GRUB recognizes the change:update-grub3.    Run the grub-set-default command so that the stable kernel loads at the next reboot. In this example, grub-set-default is set to 1 in position 0:grub-set-default 1GRUB2 for RHEL 7 and Amazon Linux 21.    
Replace the corrupt GRUB_DEFAULT=0 default menu entry with the stable GRUB_DEFAULT=saved value in the /etc/default/grub file:sed -i 's/GRUB_DEFAULT=0/GRUB_DEFAULT=saved/g' /etc/default/grub2.    Update GRUB to regenerate the /boot/grub2/grub.cfg file:grub2-mkconfig -o /boot/grub2/grub.cfg3.    Run the grub2-set-default command so that the stable kernel loads at the next reboot. In this example grub2-set-default is set to 1 in position 0:grub2-set-default 1GRUB2 for RHEL 8 and CentOS 8GRUB2 in RHEL 8 and CentOS 8 uses blscfg files and entries in /boot/loader for the boot configuration, instead of the previous grub.cfg format. It's a best practice to use the grubby tool for managing the blscfg files and retrieving information from the /boot/loader/entries/. If the blscfg files are missing from this location or corrupted, grubby doesn't show any results. You must regenerate the files to recover functionality. Therefore, the indexing of the kernels depends on the .conf files located under /boot/loader/entries and on the kernel versions. Indexing is configured to keep the latest kernel with the lowest index. For information on how to regenerate BLS configuration files, see How can I recover my Red Hat 8 or CentOS 8 instance that is failing to boot due to issues with the Grub2 BLS configuration file?1.    Run the grubby --default-kernel command to see the current default kernel:grubby --default-kernel2.    Run the grubby --info=ALL command to see all available kernels and their indexes:grubby --info=ALLThe following is example output from the --info=ALL command:root@ip-172-31-29-221 /]# grubby --info=ALLindex=0kernel="/boot/vmlinuz-4.18.0-305.el8.x86_64"args="ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto $tuned_params"root="UUID=d35fe619-1d06-4ace-9fe3-169baad3e421"initrd="/boot/initramfs-4.18.0-305.el8.x86_64.img $tuned_initrd"title="Red Hat Enterprise Linux (4.18.0-305.el8.x86_64) 8.4 (Ootpa)"id="0c75beb2b6ca4d78b335e92f0002b619-4.18.0-305.el8.x86_64"index=1kernel="/boot/vmlinuz-0-rescue-0c75beb2b6ca4d78b335e92f0002b619"args="ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto"root="UUID=d35fe619-1d06-4ace-9fe3-169baad3e421"initrd="/boot/initramfs-0-rescue-0c75beb2b6ca4d78b335e92f0002b619.img"title="Red Hat Enterprise Linux (0-rescue-0c75beb2b6ca4d78b335e92f0002b619) 8.4 (Ootpa)"id="0c75beb2b6ca4d78b335e92f0002b619-0-rescue"index=2kernel="/boot/vmlinuz-4.18.0-305.3.1.el8_4.x86_64"args="ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto $tuned_params"root="UUID=d35fe619-1d06-4ace-9fe3-169baad3e421"initrd="/boot/initramfs-4.18.0-305.3.1.el8_4.x86_64.img $tuned_initrd"title="Red Hat Enterprise Linux (4.18.0-305.3.1.el8_4.x86_64) 8.4 (Ootpa)"id="ec2fa869f66b627b3c98f33dfa6bc44d-4.18.0-305.3.1.el8_4.x86_64"Note the path of the kernel that you want to set as the default for your instance. In the preceding example, the path for the kernel at index 2 is /boot/vmlinuz-4.18.0-305.3.1.el8_4.x86_64.3.    Run the grubby --set-default command to change the default kernel of the instance:grubby --set-default=/boot/vmlinuz-4.18.0-305.3.1.el8_4.x86_64Note: Replace 4.18.0-305.3.1.el8_4.x86_64 with your kernel's version number.4.    
Run the grubby --default-kernel command to verify that the preceding command worked:grubby --default-kernelIf you're accessing the instance using the EC2 Serial Console, then the stable kernel now loads and you can reboot the instance.If you're using a rescue instance, then complete the steps in the following section.Unmount volumes, detach the root volume from the rescue instance, and then attach the volume to the impaired instanceNote: Complete the following steps if you used Method 2: Use a rescue instance to access the root volume.1.    Exit from chroot, and unmount /dev, /run, /proc, and /sys:exitumount /mnt/{dev,proc,run,sys,}2.    From the Amazon EC2 console, choose Instances, and then choose the rescue instance.3.    Choose Instance State, Stop instance, and then select Yes, Stop.4.    Detach the root volume id-xxxxx (the volume from the impaired instance) from the rescue instance.5.    Attach the root volume you detached in step 4 to the impaired instance as the root volume (/dev/sda1), and then start the instance.Note: The root device differs by AMI. The names /dev/xvda or /dev/sda1 are reserved for the root device. For example, Amazon Linux 1 and 2 use /dev/xvda. Other distributions, such as Ubuntu 14, 16, 18, CentOS 7, and RHEL 7.5, use /dev/sda1.The stable kernel now loads and your instance reboots.Follow"
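If you prefer the AWS CLI to the console for the volume handling in Method 2, a rough equivalent looks like the following; the instance IDs, volume ID, and device names are placeholders, and the root device name must match your AMI (/dev/xvda or /dev/sda1):
# Stop the impaired instance and detach its root volume
aws ec2 stop-instances --instance-ids i-0aaaaaaaaaaaaaaaaa
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
# Attach the volume to the rescue instance on an unused device
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0bbbbbbbbbbbbbbbbb --device /dev/sdf
# After updating GRUB: move the volume back and start the impaired instance
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0aaaaaaaaaaaaaaaaa --device /dev/xvda
aws ec2 start-instances --instance-ids i-0aaaaaaaaaaaaaaaaa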
https://repost.aws/knowledge-center/revert-stable-kernel-ec2-reboot
How can I set the number or size of files when I run a CTAS query in Athena?
"When I run a CREATE TABLE AS SELECT (CTAS) query in Amazon Athena, I want to define the number of files or the amount of data per file."
"When I run a CREATE TABLE AS SELECT (CTAS) query in Amazon Athena, I want to define the number of files or the amount of data per file.ResolutionUse bucketing to set the file size or number of files in a CTAS query.Note: The following steps use the Global Historical Climatology Network Daily public dataset (s3://noaa-ghcn-pds/csv.gz/) to illustrate the solution. For more information about this dataset, see Visualize over 200 years of global climate data using Amazon Athena and Amazon QuickSight. These steps show how to examine your dataset, create the environment, and then modify the dataset:Modify the number of files in the Amazon Simple Storage Service (Amazon S3) dataset.Set the approximate size of each file.Convert the data format and set the approximate file size.Examine the datasetRun the following AWS Command Line Interface (AWS CLI) to verify the number of files and the size of the dataset:Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.aws s3 ls s3://noaa-ghcn-pds/csv.gz/ --summarize --recursive --human-readableThe output looks similar to the following:2019-11-30 01:58:05 3.3 KiB csv.gz/1763.csv.gz2019-11-30 01:58:06 3.2 KiB csv.gz/1764.csv.gz2019-11-30 01:58:06 3.3 KiB csv.gz/1765.csv.gz2019-11-30 01:58:07 3.3 KiB csv.gz/1766.csv.gz...2019-11-30 02:05:43 199.7 MiB csv.gz/2016.csv.gz2019-11-30 02:05:50 197.7 MiB csv.gz/2017.csv.gz2019-11-30 02:05:54 197.0 MiB csv.gz/2018.csv.gz2019-11-30 02:05:57 168.8 MiB csv.gz/2019.csv.gzTotal Objects: 257Total Size: 15.4 GiBCreate the environment1.    Run a statement similar to the following to create a table:CREATE EXTERNAL TABLE historic_climate_gz( id string, yearmonthday int, element string, temperature int, m_flag string, q_flag string, s_flag string, obs_time int)ROW FORMAT DELIMITED FIELDS TERMINATED BY ','STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'LOCATION 's3://noaa-ghcn-pds/csv.gz/'2.    Run the following command to test the table:SELECT * FROM historic_climate_gz LIMIT 10The output shows ten lines from the dataset. After the environment is created, use one or more of the following methods to modify the dataset when you run CTAS queries.Modify the number of files in the datasetIt's a best practice to bucket data by a column that has high cardinality and evenly distributed values. For more information, see Bucketing vs Partitioning. In the following example, we use the yearmonthday field.1.    To convert the dataset into 20 files, run a statement similar to the following:CREATE TABLE "historic_climate_gz_20_files"WITH ( external_location = 's3://awsexamplebucket/historic_climate_gz_20_files/', format = 'TEXTFILE', bucket_count=20, bucketed_by = ARRAY['yearmonthday'] ) ASSELECT * FROM historic_climate_gzReplace the following values in the query:external_location: Amazon S3 location where Athena saves your CTAS queryformat: format that you want for the output (such as ORC, PARQUET, AVRO, JSON, or TEXTFILE)bucket_count: number of files that you want (for example, 20)bucketed_by: field for hashing and saving the data in the bucket (for example, yearmonthday)2.    Run the following command to confirm that the bucket contains the desired number of files:aws s3 ls s3://awsexamplebucket/historic_climate_gz_20_files/ --summarize --recursive --human-readableTotal Objects: 20Total Size: 15.6 GibSet the approximate size of each file1.    
Determine how many files you need to achieve the desired file size. For example, to split the 15.4 GB dataset into 2 GB files, you need 8 files (15.4 / 2 = 7.7, rounded up to 8).2.    Run a statement similar to the following:CREATE TABLE "historic_climate_gz_2GB_files"WITH ( external_location = 's3://awsexamplebucket/historic_climate_gz_2GB_file/', format = 'TEXTFILE',    bucket_count=8,     bucketed_by = ARRAY['yearmonthday']) ASSELECT * FROM historic_climate_gzReplace the following values in the query:external_location: Amazon S3 location where Athena saves your CTAS queryformat: must be the same format as the source data (such as ORC, PARQUET, AVRO, JSON, or TEXTFILE)bucket_count: number of files that you want (for example, 20)bucketed_by: field for hashing and saving the data in the bucket. Choose a field with high cardinality.3.    Run the following command to confirm that the dataset contains the desired number of files:aws s3 ls s3://awsexamplebucket/historic_climate_gz_2GB_file/ --summarize --recursive --human-readableThe output looks similar to the following:2019-09-03 10:59:20 1.7 GiB historic_climate_gz_2GB_file/20190903_085819_00005_bzbtg_bucket-00000.gz2019-09-03 10:59:20 2.0 GiB historic_climate_gz_2GB_file/20190903_085819_00005_bzbtg_bucket-00001.gz2019-09-03 10:59:20 2.0 GiB historic_climate_gz_2GB_file/20190903_085819_00005_bzbtg_bucket-00002.gz2019-09-03 10:59:19 1.9 GiB historic_climate_gz_2GB_file/20190903_085819_00005_bzbtg_bucket-00003.gz2019-09-03 10:59:17 1.7 GiB historic_climate_gz_2GB_file/20190903_085819_00005_bzbtg_bucket-00004.gz2019-09-03 10:59:21 1.9 GiB historic_climate_gz_2GB_file/20190903_085819_00005_bzbtg_bucket-00005.gz2019-09-03 10:59:18 1.9 GiB historic_climate_gz_2GB_file/20190903_085819_00005_bzbtg_bucket-00006.gz2019-09-03 10:59:17 1.9 GiB historic_climate_gz_2GB_file/20190903_085819_00005_bzbtg_bucket-00007.gzTotal Objects: 8Total Size: 15.0 GiBConvert the data format and set the approximate file size1.    Run a statement similar to the following to convert the data to a different format:CREATE TABLE "historic_climate_parquet"WITH ( external_location = 's3://awsexamplebucket/historic_climate_parquet/', format = 'PARQUET') ASSELECT * FROM historic_climate_gzReplace the following values in the query:external_location: Amazon S3 location where Athena saves your CTAS query format: format that you want to covert to (ORC,PARQUET, AVRO, JSON, or TEXTFILE)2.    Run the following command to confirm the size of the dataset:aws s3 ls s3://awsexamplebucket/historic_climate_parquet/ --summarize --recursive --human-readableThe output looks similar to the following:Total Objects: 30Total Size: 9.8 GiB3.    Determine how many files that you need to achieve the desired file size. For example, if you want 500 MB files and the dataset is 9.8 GB, then you need 20 files (9,800 / 500 = 19.6, rounded up to 20).4.    To convert the dataset into 500 MB files, run a statement similar to the following:CREATE TABLE "historic_climate_parquet_500mb"WITH ( external_location = 's3://awsexamplebucket/historic_climate_parquet_500mb/', format = 'PARQUET', bucket_count=20, bucketed_by = ARRAY['yearmonthday'] ) ASSELECT * FROM historic_climate_parquetReplace the following values in the query:external_location: Amazon S3 location where Athena saves your CTAS query bucket_count: number of files that you want (for example, 20)bucketed_by: field for hashing and saving the data in the bucket. Choose a field with high cardinality.5.    
Run the following command to confirm that the dataset contains the desired number of files:aws s3 ls s3://awsexamplebucket/historic_climate_parquet_500mb/ --summarize --recursive --human-readableThe output looks similar to the following:2019-09-03 12:01:45 333.9 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000002019-09-03 12:01:01 666.7 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000012019-09-03 12:01:00 665.6 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000022019-09-03 12:01:06 666.0 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000032019-09-03 12:00:59 667.3 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000042019-09-03 12:01:27 666.0 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000052019-09-03 12:01:10 666.5 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000062019-09-03 12:01:12 668.3 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000072019-09-03 12:01:03 666.8 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000082019-09-03 12:01:10 646.4 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000092019-09-03 12:01:35 639.0 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000102019-09-03 12:00:52 529.5 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000112019-09-03 12:01:29 334.2 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000122019-09-03 12:01:32 333.0 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000132019-09-03 12:01:34 332.2 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000142019-09-03 12:01:44 333.3 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000152019-09-03 12:01:51 333.0 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000162019-09-03 12:01:39 333.0 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000172019-09-03 12:01:47 333.0 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-000182019-09-03 12:01:49 332.3 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00019Total Objects: 20Total Size: 9.9 GiBNote: The INSERT INTO statement isn't supported on bucketed tables. For more information, see Bucketed tables not supported.Related informationExamples of CTAS queriesConsiderations and limitations for CTAS queriesFollow"
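You can also submit the same CTAS statements from the AWS CLI instead of the Athena console. A sketch, with the database name, result location, and bucket names as example values:
# Submit the bucketed CTAS query (example: 20 files bucketed by yearmonthday)
aws athena start-query-execution \
  --query-string "CREATE TABLE historic_climate_gz_20_files WITH (external_location = 's3://awsexamplebucket/historic_climate_gz_20_files/', format = 'TEXTFILE', bucket_count = 20, bucketed_by = ARRAY['yearmonthday']) AS SELECT * FROM historic_climate_gz" \
  --query-execution-context Database=default \
  --result-configuration OutputLocation=s3://awsexamplebucket/athena-query-results/
# Check the query state with the returned QueryExecutionId
aws athena get-query-execution --query-execution-id <QueryExecutionId>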
https://repost.aws/knowledge-center/set-file-number-size-ctas-athena
What support is available for AWS Marketplace rule groups for AWS WAF?
I need support related to the AWS Marketplace AWS WAF rules. Whom can I contact?
"I need support related to the AWS Marketplace AWS WAF rules. Whom can I contact?Short descriptionAWS WAF provides AWS Marketplace rule groups to help you protect your resources. AWS Marketplace rule groups are collections of predefined, ready-to-use rules that are written and updated by AWS Marketplace sellers. For issues related to the AWS WAF Marketplace rules regarding false positives or false negatives, contact the Marketplace seller for further troubleshooting.ResolutionAWS Marketplace managed rule groups are available by subscription through the AWS Marketplace. After you subscribe to an AWS Marketplace managed rule group, you can use it in AWS WAF. To use an AWS Marketplace rule group in an AWS Firewall Manager AWS WAF policy, each account in your organization must subscribe to it.TroubleshootingIf an AWS Marketplace rule group is blocking legitimate traffic, then follow these steps:Exclude specific rules that are blocking legitimate trafficYou can identify the rules that are blocking the requests using either the AWS WAF sampled requests or logging for a web ACL. You can identify the rules in a rule group by viewing the Rule inside rule group in the sampled request or the ruleGroupId field in the web ACL log. For more information, see Access to the rules in an AWS Marketplace rule group.Identify the rule using this pattern:<SellerName>#<RuleGroupName>#<RuleName>Change the action for the AWS Marketplace rule groupIf excluding specific rules doesn't solve the problem, then override a rule group's action from No override to Override to count. Doing this allows the web request to pass through, regardless of the individual rule actions within the rule group. Doing this also provides you with Amazon CloudWatch metrics for the rule group.If the issue continues after setting the AWS Marketplace rule group action to Override to count, then contact the rule group provider's customer support team.Note: For problems with a rule group that is managed by an AWS Marketplace seller, you must contact the provider’s customer support team for further troubleshooting.AWS WAF Marketplace seller's contact informationCloudbric Corp.For issues related to Cloudbric Corp. managed rule groups, see the Cloudbric help center to submit a request.Cyber Security Cloud Inc.For issues related to Cyber Security Cloud managed rules, see Contact Cyber Security Cloud support.F5 (DevCentral)F5 rules for AWS WAF are supported on DevCentral. For issues related to F5 managed rules, submit a question with the tag F5 rules for AWS WAF to DevCentral's technical forum.For more information about F5 rules for AWS WAF, see Overview of F5 rule groups for AWS WAF on the AskF5 website.FortinetFor issues related to Fortinet managed rule groups, send an email to the Fortinet support team.For more information about deploying Fortinet rules for AWS WAF, see Technical tip: deploying Fortinet AWS WAF partner rule groups on the Fortinet community website.GeoGuardFor issues related to GeoGuard managed rule groups, send an email to GeoGuard support.ImpervaFor issues related to an Imperva managed rule groups, send an email to Imperva support.For more information about Imperva rules for AWS WAF, see Get started with Imperva managed rules on AWS WAF on the Imperva documentation website.ThreatSTOPFor issues related to ThreatSTOP managed rule groups, see Contact ThreatSTOP.Related informationAWS Marketplace managed rule groupsFollow"
https://repost.aws/knowledge-center/waf-marketplace-support
Why are some of my AWS Glue tables missing in Athena?
Some of the tables that I see in the AWS Glue console aren't visible in the Amazon Athena console.
"Some of the tables that I see in the AWS Glue console aren't visible in the Amazon Athena console.ResolutionYou might see more tables in the AWS Glue console than in the Athena console for the following reasons:Different data sourcesIf you created tables that point to different data sources, then the consoles show tables from different sets of data. The Athena console shows only those tables that point to Amazon Simple Storage Service (Amazon S3) paths. AWS Glue lists tables that point to different data sources, such as Amazon Relational Database Service (Amazon RDS) DB instances and Amazon DynamoDB tables. For more information on using Athena to query data from different sources, see Connecting to data sources and Using Amazon Athena Federated Query.Unsupported table formatsYour tables doesn't appear in the Athena console if you created them in formats that aren't supported by Athena, such as XML. These tables appear in the AWS Glue Data Catalog, but not in the Athena console. For a list of supported formats, see Supported SerDes and data formats.Unavailable resources from AWS Lake FormationResources in Lake Formation aren't automatically shared with Athena or granted permissions. To make sure that resources are accessible between these services, create policies that allow your resources permission to Athena. For managing resource policies at scale within a single account, use tag-based asset control. For a detailed guide of this process, see Easily manage your data lake at scale using AWS Lake Formation Tag-based access control.For managing resource policies across accounts, you can use tag-based asset control or named resources. For a detailed guide of both options, see Securely share your data across AWS accounts using AWS Lake Formation.Related informationWhat is Amazon Athena?Adding classifiers to a crawler in AWS GlueFollow"
https://repost.aws/knowledge-center/athena-glue-tables-not-visible
Why can't my EC2 instances access the internet using a NAT gateway?
"I created a network address translation (NAT) gateway so that my Amazon Elastic Compute Cloud (Amazon EC2) instances can connect to the internet. However, I can't access the internet from my EC2 instances. Why can't my EC2 instances access the internet using a NAT gateway?"
"I created a network address translation (NAT) gateway so that my Amazon Elastic Compute Cloud (Amazon EC2) instances can connect to the internet. However, I can't access the internet from my EC2 instances. Why can't my EC2 instances access the internet using a NAT gateway?ResolutionInternet connectivity issues with NAT gateways are typically caused by subnet misconfigurations or missing routes. To troubleshoot issues connecting to the internet with your NAT gateway, verify the following:The subnet where the NAT gateway is launched is associated with a route table that has a default route to an internet gateway.The subnet where your EC2 instances are launched is associated with a route table that has a default route to the NAT gateway.Outbound internet traffic is allowed in both the security groups and the network access control list (ACL) that is associated with your source instance.The network ACL associated with the subnet where the NAT gateway is launched allows inbound traffic from the EC2 instances and the internet hosts. Also verify that the network ACL allows outbound traffic to the internet hosts and to the EC2 instances. For example, to allow your EC2 instances to access an HTTPS website, the network ACL associated with the NAT gateway subnet must have the rules as listed in this table.Inbound rules:SourceProtocolPort RangeAllow / DenyVPC CIDRTCP443ALLOWInternet IPTCP1024-65535ALLOWOutbound rules:DestinationProtocolPort RangeAllow / DenyInternet IPTCP443ALLOWVPC CIDRTCP1024-65535ALLOWRelated informationHow do I set up a NAT gateway for a private subnet in Amazon VPC?Work with NAT gatewaysAccess the internet from a private subnetHow do I use the VPC Reachability Analyzer to troubleshoot connectivity issues with an Amazon VPC resource?Follow"
https://repost.aws/knowledge-center/ec2-access-internet-with-NAT-gateway
How do I assign a new parameter group to my existing Amazon ElastiCache cluster without restarting the cluster?
How do I assign a new parameter group to my existing Amazon ElastiCache cluster without restarting the cluster?
"How do I assign a new parameter group to my existing Amazon ElastiCache cluster without restarting the cluster?Short descriptionA parameter group acts as a container for engine configuration values. You can apply a parameter group to one or more cache clusters. ElastiCache uses this parameter group to control the runtime properties of your nodes and clusters. You can modify the values in a parameter group for an existing cluster without restarting the cluster.ResolutionCreate a new parameter group. For detailed steps, see Creating a parameter group.Modify the new parameter group and the parameters. For more information, see Modifying a parameter group.Modify the cluster to use the new parameter group. For detailed steps, see Modifying an ElastiCache cluster.Related informationParameter managementRedis-specific parametersMemcached specific parametersFollow"
https://repost.aws/knowledge-center/elasticache-assign-parameter-group
How can I download the full SQL text from Performance Insights for my Aurora PostgreSQL-Compatible instance?
I want to download the full SQL text from Performance Insights for my Amazon Aurora PostgreSQL-Compatible Edition DB instance.
"I want to download the full SQL text from Performance Insights for my Amazon Aurora PostgreSQL-Compatible Edition DB instance.Short descriptionAurora PostgreSQL-Compatible handles text in Performance Insights differently from other engine types, like Aurora MySQL-Compatible. By default, each row under the Top SQL tab on the Performance Insights dashboard shows 500 bytes of SQL text for each SQL statement. When a SQL statement exceeds 500 bytes, you can view more text in the SQL text section that's below the Top SQL table. The maximum length for the text displayed in the SQL text section is 4 KB. If the SQL statement exceeds 4096 characters, then the truncated version is displayed on the SQL text section. But, you can download the full SQL text from the SQL text section of the TOP SQL tab.The track_activity_query_size DB parameter specifies the amount of memory that's reserved to store the text of the currently running command for each active session. This determines the maximum query length to display in the pg_stat_activity query column. To set the text limit size for SQL statements and store that limit on the database, modify the track_activity_query_size parameter. You can modify this parameter at the instance or cluster parameter group level. See the minimum and maximum allowed values for the text limit size for SQL statements:Aurora_Postgres_VersionMinimumMaximum10.x10010240011.x10010240012.x10010240013.x100104857614.x1001048576ResolutionYou can download the full SQL text from Performance Insights using the Amazon Relational Database Service (Amazon RDS) console. If the full SQL text size exceeds the value of track_activity_query_size, then increase the value of track_activity_query_size before you download the SQL text. The track_activity_query_size parameter is static, so you must reboot the cluster after you've changed its value.For example, the SQL text size might be set to 1 MB, and track_activity_query_size is set to the default value of 4096 bytes. In this case, the full SQL can't be downloaded. When the engine runs the SQL text to Performance Insights, the Amazon RDS console displays only the first 4 KB. Increase the value of track_activity_query_size to 1 MB or larger, and then download the full query. In this case, viewing and downloading the SQL text return a different number of bytes.In the Performance Insights dashboard, you can view or download the full SQL text by following these steps:1.    Open the Amazon RDS console.2.    In the navigation pane, choose Performance Insights.3.    Choose the DB instance that you want to view Performance Insights for.4.    From the Top SQL tab, choose the SQL statement that you want to view.5.    Under the SQL text tab, you can view up to 4,096 bytes for each SQL statement. If the SQL statement falls within this limit, then choose Copy to copy the SQL.6.    If the SQL statement is larger than 4,096, then it's truncated in this view. Choose Download to download the full SQL.Note: Be sure that the track_activity_query_size parameter is set to a larger value than the SQL statement that you want to download.Related informationViewing Aurora PostgreSQL DB cluster and DB parametersRebooting an Aurora cluster (Aurora PostgreSQL and Aurora MySQL before version 2.10)Follow"
https://repost.aws/knowledge-center/aurora-postgresql-performance-insights
How do I troubleshoot the error "READONLY You can't write against a read only replica" after failover of my Redis (cluster mode disabled) cluster?
Why am I receiving the "READONLY You can't write against a read only replica" error in my Amazon ElastiCache for Redis (cluster mode disabled) cluster after failover?
"Why am I receiving the "READONLY You can't write against a read only replica" error in my Amazon ElastiCache for Redis (cluster mode disabled) cluster after failover?Short descriptionIf the primary node failed over to the replica nodes in your Amazon ElastiCache cluster, then the replica takes the role of primary node to serve incoming requests. However, in the following scenarios you receive the READONLY error:You're using a node endpoint instead of primary endpoint of the cluster in your application.-or-DNS caching in the application node routes traffic to the old primary node.Resolution1.    Verify that the cluster is cluster mode disabled. To do this:Open the ElastiCache console, and then select Redis clusters. Verify that the Cluster Mode for the cluster is off.Note: If the Cluster Mode is on, see I'm using ElastiCache or Redis. Why are my Redis client read requests always read from or redirected to the primary node of a shard?2.    Verify that you're sending the write command to the primary endpoint instead of the node endpoint. To validate that the write command is going to the primary node, do one of the following:Option 1Connect to the Redis cluster using the redis-cli and then run the get key command for the updated key. Then, verify the command output to verify that the key value updated after the last command.For example, the following command sets the key1 value to hello:set key1 "hello" OKTo verify that the key set to correctly, run the get command:get key1"hello"Option 2Connect to the Redis cluster using the redis-cli and then run command the MONITOR command. This lists all commands coming to the cluster. Keep in mind that running a single MONITOR client might reduce throughput by more than 50%.3.    To avoid DNS caching issues, turn on retry logic in your application following the guidelines for the Redis client library that your application uses.Follow"
https://repost.aws/knowledge-center/elasticache-correct-readonly-error
How do I upgrade my Elastic Beanstalk environment platform from a deprecated or retired version to the latest version?
"I received a notification that my AWS Elastic Beanstalk platform is a deprecated version. Or, I received a notification that my platform version is marked for retirement."
"I received a notification that my AWS Elastic Beanstalk platform is a deprecated version. Or, I received a notification that my platform version is marked for retirement.Short descriptionDeprecated platform versions are the old platform versions or branches that are available to customers, but aren't recommended by AWS. Deprecated versions might have missing security updates, hot fixes, or the latest versions of other components, such as the web server.Elastic Beanstalk marks platform branches as retired when a component of a supported platform branch is marked End of Life (EOL) by its supplier. Components of a platform branch might be the operating system, runtime, application server, or web server.When a platform branch is marked as retired, it's no longer available to new Elastic Beanstalk customers for deployments to new environments. There's a 90-day grace period from the published retirement date for existing customers with active environments that are running on retired platform branches. When a platform version is mark deprecated, it's available for customers to use until it's marked for retirement.ResolutionMigrate from a retired platformTo upgrade to the latest platform, perform a blue/green deployment. Blue/green deployments deploy a separate environment with the latest platform branch and version. Then, swap the CNAMEs of the two environments to redirect traffic from the old environment to the new environment.Note: Both environments must be in same application and in a working state to swap CNAMEs.For more information, see Blue/Green deployments with Elastic Beanstalk.To check for the retired platform branches, see Elastic Beanstalk platform versions scheduled for retirement.Migrate from a deprecated platformA platform version might be marked for deprecation due to kernel changes, web server changes, security fixes, hot fixes, and so on. These changes are categorized as follows:Patch: Patch version updates provide bug fixes and performance improvements. Patch updates might include minor configuration changes to the on-instance software, scripts, and configuration options.Minor: Minor version updates provide support for new Elastic Beanstalk features.Major: Major version updates provide different kernels, web servers, application servers, and so on.Based on the changes being made, use one of the following migration methods:Minor or patch updatesWith minor or patch changes, your platform branch remains the same. For instructions, see Method 1 - Update your environment's platform version.You can also have Elastic Beanstalk manage platform updates for you. For more information, see Managed platform updates.Major updatesYour platform branch changes in major updates. When you switch platform branches, you must perform a blue/green deployment. You must also use blue/green deployments when migrating from Amazon Linux 1 to Amazon Linux 2 or from a legacy platform to a current platform. For more information, see Method 2 - Perform a blue/green deployment.Related informationUpdating your Elastic Beanstalk environment's platform versionFollow"
https://repost.aws/knowledge-center/elastic-beanstalk-upgrade-platform
How do I use AWS CloudTrail to track API calls to my Amazon EC2 instances?
"I want to track API calls that run, stop, start, and terminate my Amazon Elastic Compute Cloud (Amazon EC2) instances. How do I search for API calls to my Amazon EC2 instances using AWS CloudTrail?"
"I want to track API calls that run, stop, start, and terminate my Amazon Elastic Compute Cloud (Amazon EC2) instances. How do I search for API calls to my Amazon EC2 instances using AWS CloudTrail?Short descriptionAWS CloudTrail allows you to identify and track four types of API calls (event types) made to your AWS account:RunInstancesStopInstancesStartInstancesTerminateInstancesTo review these types of API calls after they've been made to your account, you can use any of the following methods.Note: You can view event history for your account up to the last 90 days.ResolutionTo track API calls using CloudTrail event history1.    Open the CloudTrail console.2.    Choose Event history.3.    For Filter, select Event name from the dropdown list.4.    For Enter event name, enter the event type that you want to search for. Then, choose the event type.5.    For Time range, enter the desired time range that you want to track the event type for.6.    Choose Apply.For more information, see Viewing events with CloudTrail event history and Viewing Cloudtrail events in the CloudTrail console.To track API calls using Amazon Athena queriesFollow the instructions in How do I automatically create tables in Amazon Athena to search through AWS CloudTrail logs?The following are example queries for the RunInstances API call. You can use similar queries for any of the supported event types.Important: Replace cloudtrail-logs with your Athena table name before running any of the following query examples.Example query to return all available event information for the RunInstances API callSELECT *FROM cloudtrail-logsWHERE eventName = 'RunInstances'Example query to return filtered event information for the RunInstances API callSELECT userIdentity.username, eventTime, eventNameFROM cloudtrail-logsWHERE eventName = 'RunInstances'Example query to return event information for the APIs that end with the string "Instances" from a point in time to the current dateImportant: Replace '2021-07-01T00:00:01Z' with the point in time you'd like to return event information from.SELECT userIdentity.username, eventTime, eventNameFROM cloudtrail-logsWHERE (eventName LIKE '%Instances') AND eventTime > '2021-07-01T00:00:01Z'To track API calls using archived Amazon CloudWatch Logs in Amazon Simple Storage Service (Amazon S3)Important: To log events to an Amazon S3 bucket, you must first create a CloudWatch trail.1.    Access your CloudTrail log files by following the instructions in Finding your CloudTrail log files.2.    Download your log files by following the instructions in Downloading your CloudTrail log files.3.    Search through the logs for the event types that you want to track using jq or another JSON command line processor.Example jq procedure for searching CloudWatch logs downloaded from Amazon S3 for specific event types1.    Open a Bash terminal. Then, create the following directory to store the log files:$ mkdir cloudtrail-logs4.    Navigate to the new directory. Then, download the CloudTrail logs by running the following command:Important: Replace the example my_cloudtrail_bucket with your Amazon S3 bucket.$ cd cloudtrail-logs$ aws s3 cp s3://my_cloudtrail_bucket/AWSLogs/012345678901/CloudTrail/eu-west-1/2019/08/07 ./ --recursive5.    Decompress the log files by running the following gzip command:Important: Replace * with the file name that you want to decompress.$ gzip -d *6.    
Run a jq query for the event types that you want to search for.Example jq query to return all available event information for the RunInstances API callcat * | jq '.Records[] | select(.eventName=="RunInstances")'Example jq query to return all available event information for the StopInstances and TerminateInstances API callscat * | jq '.Records[] | select(.eventName=="StopInstances" or .eventName=="TerminateInstances" )'Related informationHow can I use CloudTrail to review what API calls and actions have occurred in my AWS account?Creating metrics from log events using filtersAWS Config console now displays API events associated with configuration changesFollow"
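In addition to the console, Athena, and jq options above, you can query recent management events directly from the AWS CLI with the cloudtrail lookup-events command. The following is a minimal sketch; the Region and time range are placeholder values that you would replace with your own, and the same command works with StopInstances, StartInstances, or TerminateInstances as the AttributeValue.
# List recent RunInstances API calls from the last 90 days of CloudTrail event history.
aws cloudtrail lookup-events \
  --region us-east-1 \
  --lookup-attributes AttributeKey=EventName,AttributeValue=RunInstances \
  --start-time 2021-07-01T00:00:00Z \
  --end-time 2021-07-08T00:00:00Z \
  --max-results 50 \
  --query 'Events[].{Time:EventTime,Event:EventName,User:Username}' \
  --output table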
https://repost.aws/knowledge-center/cloudtrail-search-api-calls
How do I verify an email address or domain in Amazon SES?
I want to verify an email address or domain that I'm using with Amazon Simple Email Service (Amazon SES). How can I do that?
"I want to verify an email address or domain that I'm using with Amazon Simple Email Service (Amazon SES). How can I do that?ResolutionTo verify an email address, see Verifying an email address identity.To verify a domain, see Verifying a DKIM domain identity with your DNS provider for instructions.Note: If you're using Amazon Route 53 as your DNS provider, then Amazon SES can automatically add your domain or DKIM verification CNAME records to your DNS records. If you aren't using Route 53, you must work with your DNS provider to update your DNS records.Related informationVerified identities in Amazon SESFollow"
https://repost.aws/knowledge-center/ses-verify-email-domain
How do I view objects that failed replication from one Amazon S3 bucket to another?
I want to retrieve a list of objects that failed replication when setting up replication from one Amazon Simple Storage Service (Amazon S3) bucket to another bucket.
"I want to retrieve a list of objects that failed replication when setting up replication from one Amazon Simple Storage Service (Amazon S3) bucket to another bucket.Short descriptionYou can turn on S3 Replication Time Control (S3 RTC) to set up event notifications for eligible objects that failed replication. You can also use S3 RTC to set up notifications for eligible objects that take longer than 15 minutes to replicate. Additionally, you can get a list of objects that failed replication in one of the following ways:Reviewing the Amazon S3 inventory reportRunning the HeadObject API callResolutionAmazon S3 inventory reportAmazon S3 inventory reports list your objects and their metadata on a daily or weekly basis. The replication status of an object can be PENDING, COMPLETED, FAILED, or REPLICA.To find objects that failed replication, filter a recent report for objects with the replication status of FAILED. Then, you can initiate a manual copy of the objects to the destination bucket. You can also re-upload the objects to the source bucket (after rectifying the permissions) to initiate replication.You can also use Amazon Athena to query the inventory report for replication statuses.HeadObject API callFor a list of the objects in the source bucket that are set for replication, you can run the HeadObject API call on the objects. HeadObject returns the PENDING, COMPLETED, or FAILED replication status of an object. In a response to a HeadObject API call, the replication status is found in the x-amz-replication-status element.Note: To run HeadObject, you must have read access to the object that you're requesting. A HEAD request has the same options as a GET request, but without performing a GET.After HeadObject returns the objects with a FAILED replication status, you can initiate a manual copy of the objects to the destination bucket. You can also re-upload the objects to the source bucket (after rectifying the permissions) to initiate replication.Important: If you manually copy objects into the destination bucket, then the Amazon S3 inventory report and HeadObject API calls return a FAILED replication status. This replication status is for the objects in the source bucket. To change the replication status of an object and initiate replication, you must re-upload the object to the source bucket. If the new replication is successful, then the object's replication status changes to COMPLETED. If you must manually copy objects into the destination bucket, then be sure to note the date of the manual copy. Then, filter objects with a FAILED replication status by the last modified date. Doing this lets you to identify which objects are or aren't copied to the destination bucket.Follow"
https://repost.aws/knowledge-center/s3-list-objects-failed-replication
How do I upload my Windows logs to CloudWatch?
I want to upload my Windows logs to Amazon CloudWatch.
"I want to upload my Windows logs to Amazon CloudWatch.ResolutionUpload your Windows logs to CloudWatch with AWS Systems Manager and Amazon CloudWatch agent. Then, store the configuration file in the SSM Parameter Store, a capability of AWS Systems Manager.Create IAM rolesCreate server and administrator AWS Identity and Access Management (IAM) roles to use with the CloudWatch agent. The server role allows instances to upload metrics and logs to CloudWatch. The administrator role creates and stores the CloudWatch configuration template in the Systems Manager Parameter Store.Note: Be sure to follow both IAM role creation procedures to limit access to the admin role.Attach the server roleAttach the server role to any Elastic Compute Cloud (Amazon EC2) instances that you want to upload your logs for.Attach the administrator roleAttach the administrator role to your administrator configuration instance.Install the CloudWatch agent packageDownload and install the CloudWatch agent package with AWS Systems Manager Run Command. In the Targets area, choose your server instances and your administrator instance.Note: Before you install the CloudWatch agent, be sure to update or install SSM agent on the instance.Create the CloudWatch agent configuration fileCreate the CloudWatch agent configuration file on your administrator instance using the configuration wizard. Store the file in the Parameter Store. Record the Parameter Store name that you choose. For an example configuration with logs, see CloudWatch agent configuration file: Logs section.To create your configuration file, complete the following steps:Run PowerShell as an administrator.To start the configuration wizard, open Command Prompt. Then, run the .exe file that's located at C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-config-wizard.exe.To create the configuration file, answer the following questions in the configuration wizard:On which OS are you planning to use the agent?Select Windows.Are you using EC2 or On-Premises hosts?Select Ec2.Do you have any existing CloudWatch Log Agent configuration file to import for migration?Select No.Do you want to monitor any host metrics?If you want to push only logs, then select No.Do you want to monitor any customized log files?If you want to push only default Windows Event Logs, then select No. If you also want to push custom logs, then select Yes.Do you want to monitor any Windows event log?If you want to push Windows Event Logs, then select Yes.When the configuration wizard prompts you to store your file in Parameter Store, select Yes to use the parameter in SSM.Apply your configurationTo apply the configuration to the server instances and start uploading logs, start the CloudWatch agent using Systems Manager Run Command.For Targets, choose your server instances.For Optional Configuration Location, enter the Parameter Store name that you chose in the wizard.Related informationCollect metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agentQuick Start: Install and configure the CloudWatch Logs agent on a running EC2 Linux instanceFollow"
https://repost.aws/knowledge-center/cloudwatch-upload-windows-logs
Why can't I resolve service domain names for an interface VPC endpoint?
"I'm using an interface Amazon Virtual Private Cloud (Amazon VPC) endpoint for an AWS service. I want to use the default service domain name (for example, ec2.us-east-1.amazonaws.com) to access the service through the VPC interface endpoint. Why can't I resolve service domain names for an interface VPC endpoint?"
"I'm using an interface Amazon Virtual Private Cloud (Amazon VPC) endpoint for an AWS service. I want to use the default service domain name (for example, ec2.us-east-1.amazonaws.com) to access the service through the VPC interface endpoint. Why can't I resolve service domain names for an interface VPC endpoint?ResolutionTo resolve service domain names (for example, ec2.us-east-2-amazonaws.com) for an interface VPC endpoint, keep the following in mind:To resolve service domain names to the interface VPC endpoint's private IPs, you must send the DNS queries to the Amazon-provided DNS of the VPC where the interface endpoint is created. The Amazon-provided DNS is the base of the VPC CIDR plus two.On the VPC where you created the interface VPC endpoint, verify that both DNS attributes of the VPC, DNS Hostnames and DNS Resolution, are turned on.When using interface VPC endpoints to access available AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2), you can turn on private DNS names on the endpoint. When you have this parameter turned on, queries for the service domain name resolve to private IP addresses. These private IP addresses are the IP addresses of the elastic network interfaces created in each of the associated subnets for a given interface endpoint.With private DNS names turned on, you can run AWS API calls using the service domain name (for example, ec2.us-east-1.amazonaws.com) over AWS PrivateLink.For the interface VPC endpoint, verify that private DNS names is turned on. If private DNS names isn't turned on, the service domain name or endpoint domain name resolves to regional public IPs. For steps to turn on private DNS names, see Modify an interface endpoint.You can designate custom domain name servers in the DHCP Option Set for the VPC. When using custom domain name servers, the DNS queries for the service domain names are sent to the custom domain name servers for resolution. The custom domain name servers might be located within the VPC or outside of the VPC.Custom domain name servers must forward the service domain name to the Amazon-provided DNS server of the VPC where the interface endpoints are created.If you're trying to access an interface endpoint from outside of the VPC (cross-VPC or on-premises), make sure that you have the DNS architecture in place. The DNS architecture should forward the DNS queries for the service domain name to the Amazon-provided DNS server of the VPC where the interface endpoints are created.You can use tools such as nslookup or dig against the service domain name from the source network to confirm the IPs that it's resolving to.Or, you can use regional endpoint domain names on your SDK to execute API calls. The regional endpoint domain names of the interface endpoints are resolvable from any network. The following is an example for performing a describe call using the AWS Command Line Interface (AWS CLI):$aws ec2 describe-instances --endpoint-url https://vpce-aaaabbbbcccc-dddd.vpce-svc-12345678.us-east-1.vpce.amazonaws.comIf you created an Amazon Route 53 private hosted zone for the service domain name, make sure that you attach the correct source VPC to the hosted zone. 
For more information, see How can I troubleshoot DNS resolution issues with my Route 53 private hosted zone?Note: You must establish connectivity from the network to the VPC using VPC peering, AWS Transit Gateway, and so on, for routing DNS queries.Related informationHow do I configure a Route 53 Resolver inbound endpoint to resolve DNS records in my private hosted zone from my remote network?How do I configure a Route 53 Resolver outbound endpoint to resolve DNS records hosted on a remote network from resources in my VPC?Follow"
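The following is a minimal sketch for checking the points above from the AWS CLI and from a host inside the VPC; the VPC ID, endpoint ID, and service name are placeholders.
# Confirm that both DNS attributes are turned on for the VPC.
aws ec2 describe-vpc-attribute --vpc-id vpc-0abcd1234example --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0abcd1234example --attribute enableDnsHostnames
# Confirm that private DNS names are turned on for the interface endpoint.
aws ec2 describe-vpc-endpoints --vpc-endpoint-ids vpce-0abcd1234example \
  --query 'VpcEndpoints[].PrivateDnsEnabled'
# From an instance inside the VPC, confirm that the service name resolves to private IPs.
dig +short ec2.us-east-1.amazonaws.com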
https://repost.aws/knowledge-center/vpc-interface-configure-dns
How do I configure logging levels manually for specific resources in AWS IoT Core?
I want to configure resource-specific logging manually for my AWS IoT Core logs.
"I want to configure resource-specific logging manually for my AWS IoT Core logs.Short descriptionNote: This article relates only to V2 of AWS IoT Core logs.AWS IoT Core logs allows you to set resource-specific logging levels for:Clients registered as thingsClients not registered as thingsThis is done by creating a logging level for a specific target type and configuring its verbosity level. Target types include THING_GROUP, CLIENT_ID, SOURCE_IP, or PRINCIPAL_ID. It's a best practice to configure default logging to a lower verbosity level and configure resource-specific logging to a higher verbosity level.Log verbosity levels include DISABLED (lowest), ERROR, WARN, INFO, and DEBUG (highest).Important: Depending on your AWS IoT Core fleet size, turning on more verbose log levels can incur high costs and make troubleshooting more difficult. Turning on verbose logging also creates higher data traffic. INFO or DEBUG should only be used as a temporary measure while troubleshooting. After troubleshooting is complete, logging levels should be set back to a less verbose setting.ResolutionPrerequisiteMake sure that you have the AWS Command Line Interface (AWS CLI) installed locally with IoT admin permission credentials. The default AWS Region for AWS CLI must point towards the targeted AWS Region. You must have clients connected to and interacting with your AWS IoT Core endpoints, either as registered or non-registered IoT things.Note: If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.Configure manual logging for clients registered as thingsYou can manage resource-specific logging for multiple things at a defined logging level, and then add or remove things from the thing group manually. Your devices and clients must be registered as IoT things in AWS IoT Core and must connect using the same client ID associated thing name. You can then use a static thing group with a target type of THING_GROUP to manage the thing group. If you configure a parent thing group within a hierarchy, then the configuration applies to the child thing groups of the hierarchy as well.Note: If you use static thing groups as a target type, then you must consider their quota limits. For more information, see AWS IoT Core thing group resource limits and quotas.1.    Create two static thing groups. You can do this using the AWS IoT console or using the create-thing-group command in the AWS CLI. In this example, the AWS CLI is used.aws iot create-thing-group --thing-group-name logging_level_infoaws iot create-thing-group --thing-group-name logging_level_debugNote: If you are using existing thing groups, then replace logging_level_info and logging_level_debug with the names of your thing groups.The output looks similar to the following message:{ "thingGroupName": "logging_level_info", "thingGroupArn": "arn:aws:iot:eu-west1-1:123456789012:thinggroup/logging_level_info", "thingGroupId": "58dd497e-97fc-47d2-8745-422bb21234AA"}{ "thingGroupName": "logging_level_debug", "thingGroupArn": "arn:aws:iot:eu-west-1:123456789012:thinggroup/logging_level_debug", "thingGroupId": "2a9dc698-9a40-4487-81ec-2cb4101234BB"}2.    
Run the SetV2LoggingLevel command to set the logging levels for the thing groups: Note: It can take up to 10 minutes for log level configuration changes to be reflected.aws iot set-v2-logging-level \ --log-target targetType=THING_GROUP,targetName=logging_level_info \ --log-level INFOaws iot set-v2-logging-level \--log-target targetType=THING_GROUP,targetName=logging_level_debug \--log-level DEBUGNote: Replace INFO and DEBUG with the log levels that you want to set for each thing group.3.    Run the following command to confirm that the logging levels are configured correctly:aws iot list-v2-logging-levelsThe output looks similar to the following message:{ "logTargetConfigurations": [ { "logTarget": { "targetType": "DEFAULT" }, "logLevel": "WARN" }, { "logTarget": { "targetType": "THING_GROUP", "targetName": "logging_level_debug" }, "logLevel": "DEBUG" }, { "logTarget": { "targetType": "THING_GROUP", "targetName": "logging_level_info" }, "logLevel": "INFO" } ]}4.    Run the AddThingToThingGroup command to add a thing to the appropriate things group:aws iot add-thing-to-thing-group \ --thing-name YourThingName1 \ --thing-group-name logging_level_infoNote: Replace YourThingName1 with the name of the thing that you are adding to the thing group.Configure manual logging for clients not registered as thingsIf you don't register your things to AWS IoT Core, you can still add resource-specific logging levels for multiple target types. These target types are client attributes and include CLIENT_ID, SOURCE_IP, or PRINCIPAL_ID. If your device is already registered as an AWS IoT Core thing, you can still use these client attributes to manage logging levels.1.    Run the SetV2LoggingLevel command to set the logging level for a specific client:aws iot set-v2-logging-level \ --log-target targetType=CLIENT_ID,targetName=YourClientId \ --log-level YourLogLevelNote: To use a different target type, replace CLIENT_ID with a supported value that is used by the targeted client, such as SOURCE_IP or PRINCIPAL_ID.2.    Run the following command to confirm the logging levels are configured correctly:aws iot list-v2-logging-levelsThe output looks similar to the following message:... { "logTarget": { "targetType": "CLIENT_ID", "targetName": "YourClientId" }, "logLevel": "YourLogLevel" }...Monitoring generated logsIt's a best practice to monitor your IoT logs for issues or problems. You can use either the Amazon CloudWatch Logs Console or the AWS CLI to monitor your AWS IoT Core logs.  For more information, see the "Monitoring log entries" section of How do I best manage the logging levels of my AWS IoT logs in AWS IoT Core?Related informationMonitoring AWS IoTHow do I configure the default logging settings for AWS IoT Core?How do I configure logging levels dynamically for specific resources in AWS IoT Core?Follow"
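When troubleshooting is complete, you can remove a resource-specific level so that the target falls back to the default logging level. The following sketch assumes the thing group names used in the example above.
# Remove the DEBUG-level override for the logging_level_debug thing group.
aws iot delete-v2-logging-level \
  --target-type THING_GROUP \
  --target-name logging_level_debug
# Confirm the remaining resource-specific logging levels.
aws iot list-v2-logging-levels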
https://repost.aws/knowledge-center/aws-iot-core-configure-manual-logging
How do I resolve the PHP fatal error that I receive when deploying an application on an Elastic Beanstalk PHP platform that connects to a Microsoft SQL Server database?
I receive a PHP fatal error when deploying an application on an AWS Elastic Beanstalk PHP platform that connects to a Microsoft SQL Server database.
"I receive a PHP fatal error when deploying an application on an AWS Elastic Beanstalk PHP platform that connects to a Microsoft SQL Server database.Short descriptionWhen deploying an application on an Elastic Beanstalk PHP platform that connects to a Microsoft SQL server database, you may receive the following error:"PHP Fatal error: Uncaught Error: Call to undefined function sqlsrv_connect() in /var/app/current/DB/"To connect PHP to a Microsoft SQL Server database, you must first install and configure the SQLSRV library and its PDO extension. By default, this library and extension aren't installed and configured.To install the SQLSRV library and PDO extension, you can use a .ebextensions configuration file that runs a script in your Amazon Elastic Compute Cloud (Amazon EC2) instances. The script does the following:Installs the correct driver and tools for Microsoft SQL ServerTurns on the required PHP libraries and extensionsResolutionNote: The following steps apply to Elastic Beanstalk environments with any PHP platform versions.1.    In the root of your application bundle, create a directory named .ebextensions.2.    Create a .ebextensions configuration file, such as the following .ebextensions/pdo_sqlsrv.config. file:Important: The .ebextensions file used in the following resolution is applicable only for Amazon Linux 1 and Amazon Linux 2 Amazon Machine Image (AMI) instances. The .ebextensions file doesn't apply to Windows or custom Ubuntu AMI instances in Elastic Beanstalk.####################################################################################################### Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.#### #### Permission is hereby granted, free of charge, to any person obtaining a copy of this#### software and associated documentation files (the "Software"), to deal in the Software#### without restriction, including without limitation the rights to use, copy, modify,#### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to#### permit persons to whom the Software is furnished to do so.########################################################################################################################################################################################################## THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,#### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A#### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT#### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION#### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE#### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.###################################################################################################commands: install_mssql: command: | #!/bin/bash set -x # 0. EXIT if pdo_sqlsrv is already installed if php -m | grep -q 'pdo_sqlsrv' then echo 'pdo_sqlsrv is already installed' else # 1. Install libtool-ltdl-devel yum -y install libtool-ltdl-devel # 2. Register the Microsoft Linux repository wget https://packages.microsoft.com/config/rhel/8/prod.repo -O /etc/yum.repos.d/msprod.repo # 3. Install MSSQL and tools ACCEPT_EULA=N yum install mssql-tools msodbcsql17 unixODBC-devel -y --disablerepo=amzn* # The license terms for this product can be downloaded from http://go.microsoft.com/fwlink/?LinkId=746949 and found in /usr/share/doc/mssql-tools/LICENSE.txt . 
By changing "ACCEPT_EULA=N" to "ACCEPT_EULA=Y", you indicate that you accept the license terms. echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc source ~/.bashrc # 4. Install SQLSRV and its PDO extension, and stop pecl/pecl7 from overwriting php.ini cp -f "/etc/php.ini" "/tmp/php.ini.bk" pecl7 install sqlsrv pdo_sqlsrv || pecl install sqlsrv pdo_sqlsrv cp -f "/tmp/php.ini.bk" "/etc/php.ini" # 5. Manually add the extensions to the proper php.ini.d file and fix parameters sqlvar=$(php -r "echo ini_get('extension_dir');") && chmod 0755 $sqlvar/sqlsrv.so && chmod 0755 $sqlvar/pdo_sqlsrv.so echo extension=pdo_sqlsrv.so >> `php --ini | grep "Scan for additional .ini files" | sed -e "s|.*:\s*||"`/30-pdo_sqlsrv.ini echo extension=sqlsrv.so >> `php --ini | grep "Scan for additional .ini files" | sed -e "s|.*:\s*||"`/20-sqlsrv.ini fi3.    Create an application source bundle that includes your .ebextensions file from step 2.4.    Deploy your updated Elastic Beanstalk application.Follow"
https://repost.aws/knowledge-center/elastic-beanstalk-deployment-php-sql
How do I change billing information on the PDF version of the AWS invoice that I receive by email?
I want to change the billing information on the PDF version of the AWS invoice that I receive by email.
"I want to change the billing information on the PDF version of the AWS invoice that I receive by email.ResolutionYour PDF invoice uses the billing information that's associated with your payment method.To update your billing information, complete the following steps:1.    Sign in to the Billing and Cost Management console, and then choose Payment preferences from the navigation pane.2.    Find your default payment method under Default payment preferences, and then choose Edit.3.    Update the information for your current payment method.Note: The name on the Billing Address appears in the ATTN: section of the invoice.4.    Choose Save changes to save your changes.Your updated billing information affects only future invoices. To get an updated PDF invoice for a particular billing period, update the billing information that's associated with your payment method. Then, open a case with AWS Support.Follow"
https://repost.aws/knowledge-center/change-pdf-billing-address
How am I billed for my Amazon EBS snapshots?
I want to know how I'm billed for Amazon Elastic Block Store (Amazon EBS) snapshots.
"I want to know how I'm billed for Amazon Elastic Block Store (Amazon EBS) snapshots.ResolutionCharges for Amazon EBS snapshots are calculated by the gigabyte-month. That is, you are billed for how large the snapshot is and how long you keep the snapshot.Pricing varies depending on the storage tier. For the Standard tier, you're billed only for changed blocks that are stored. For the Archive tier, you're billed for all snapshot blocks that are stored. You're also billed for retrieving snapshots from the Archive tier.The following are example scenarios for each storage tier:Standard tier: You have a volume that's storing 100 GB of data. You're billed for the full 100 GB of data for the first snapshot (snap A). At the time of the next snapshot (snap B), you have 105 GB of data. You're then billed for only the additional 5 GB of storage for incremental snap B.Archive tier: You archive snap B. The snapshot is then moved to the Archive tier, and you're billed for the full 105-GB snapshot block.For detailed pricing information, see Amazon EBS pricing.To view the charges for your EBS snapshots, follow these steps:Open the AWS Billing Dashboard.In the navigation pane, choose Bills.In the Details section, expand Elastic Compute Cloud.You can also use cost allocation tags to track and manage your snapshot costs.Note:You aren't billed for snapshots that another AWS account owns and shares with your account. You're billed only when you copy the shared snapshot to your account. You're also billed for EBS volumes that you create from the shared snapshot.If a snapshot (snap A) is referenced by another snapshot (snap B), then deleting snap B might not reduce the storage costs. When you delete a snapshot, only the data that's unique to that snapshot is removed. Data that's referenced by other snapshots remain, and you are billed for this referenced data. To delete an incremental snapshot, see Incremental snapshot deletion.Related informationWhy did my storage costs not reduce after I deleted a snapshot of my EBS volume and then deleted the volume itself?Why am I being charged for Amazon EBS when all my instances are stopped?Follow"
https://repost.aws/knowledge-center/ebs-snapshot-billing
How do I deploy artifacts to Amazon S3 in a different AWS account using CodePipeline?
I want to deploy artifacts to an Amazon Simple Storage Service (Amazon S3) bucket in a different account. I also want to set the destination account as the object owner. Is there a way to do that using AWS CodePipeline with an Amazon S3 deploy action provider?
"I want to deploy artifacts to an Amazon Simple Storage Service (Amazon S3) bucket in a different account. I also want to set the destination account as the object owner. Is there a way to do that using AWS CodePipeline with an Amazon S3 deploy action provider?ResolutionNote: The following example procedure assumes the following:You have two accounts: a development account and a production account.The input bucket in the development account is named codepipeline-input-bucket (with versioning activated).The default artifact bucket in the development account is named codepipeline-us-east-1-0123456789.The output bucket in the production account is named codepipeline-output-bucket.You're deploying artifacts from the development account to an S3 bucket in the production account.You're assuming a cross-account role created in the production account to deploy the artifacts. The role makes the production account the object owner instead of the development account. To provide the bucket owner in the production account with access to the objects owned by the development account, see the following article: How do I deploy artifacts to Amazon S3 in a different AWS account using CodePipeline and a canned ACL?Create an AWS KMS key to use with CodePipeline in the development accountImportant: You must use the AWS Key Management Service (AWS KMS) customer managed key for cross-account deployments. If the key isn't configured, then CodePipeline encrypts the objects with default encryption, which can't be decrypted by the role in the destination account.1.    Open the AWS KMS console in the development account.2.    In the navigation pane, choose Customer managed keys.3.    Choose Create Key.4.    For Key type, choose Symmetric Key.5.    Expand Advanced Options.6.    For Key material origin, choose KMS. Then, choose Next.7.    For Alias, enter your key's alias. For example: s3deploykey.8.    Choose Next. The Define key administrative permissions page opens.9.    In the Key administrators section, select an AWS Identity and Access Management (IAM) user or role as your key administrator.Choose Next. The Define key usage permissions page opens.11.    In the Other AWS accounts section, choose Add another AWS account.12.    In the text box that appears, add the account ID of the production account. Then, choose Next.Note: You can also select an existing service role in the This Account section. If you select an existing service role, skip the steps in the Update the KMS usage policy in the development account section.13.    Review the key policy. Then, choose Finish.Create a CodePipeline in the development account1.    Open the CodePipeline console. Then, choose Create pipeline.2.    For Pipeline name, enter a name for your pipeline. For example: crossaccountdeploy.Note: The Role name text box is populated automatically with the service role name AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy. You can also choose another, existing service role with access to the KMS key.3.    Expand the Advanced settings section.4.    For Artifact store, select Default location.Note: You can select Custom location if that's required for your use case.5.    For Encryption key, select Customer Managed Key.6.    For KMS customer managed key, select your key's alias from the list (s3deploykey, for this example).Then, choose Next. The Add source stage page opens.For Source provider, choose Amazon S3.8.    For Bucket, enter the name of your development input S3 bucket. 
For example: codepipeline-input-bucket.Important: The input bucket must have versioning activated to work with CodePipeline.9.    For S3 object key, enter sample-website.zip.Important: To use a sample AWS website instead of your own website, see Tutorial: Create a pipeline that uses Amazon S3 as a deployment provider. Then, search for "sample static website" in the Prerequisites of the 1: Deploy Static Website Files to Amazon S3 section.10.    For Change detection options, choose Amazon CloudWatch Events (recommended). Then, choose Next.11.    On the Add build stage page, choose Skip build stage. Then, choose Skip.12.    On the Add deploy stage page, for Deploy provider, choose Amazon S3.13.    For Region, choose the AWS Region that your production output S3 bucket is in. For example: US East (N. Virginia).Important: If the production output bucket's Region is different than your pipeline's Region, then you must also verify the following:You're using an AWS KMS multi-Region key with multiple replicas.Your pipeline has artifact stores in both Regions.14.    For Bucket, enter the name of your production output S3 bucket. For example: codepipeline-output-bucket.15.    Select the Extract file before deploy check box.Note: If needed, enter a path for Deployment path.16.    Choose Next.17.    Choose Create pipeline. The pipeline runs, but the source stage fails. The following error appears: "The object with key 'sample-website.zip' does not exist."The Upload the sample website to the input bucket section of this article describes how to resolve this error.Update the KMS usage policy in the development accountImportant: Skip this section if you're using an existing CodePipeline service role.1.    Open the AWS KMS console in the development account.2.    Select your key's alias (s3deploykey, for this example).3.    In the Key users section, choose Add.4.    In the search box, enter the service role AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy.5.    Choose Add.Configure a cross-account role in the production accountCreate an IAM policy for the role that grants Amazon S3 permissions to your production output S3 bucket1.    Open the IAM console in the production account.2.    In the navigation pane, choose Policies. Then, choose Create policy.3.    Choose the JSON tab. Then, enter the following policy in the JSON editor:Important: Replace codepipeline-output-bucket with your production output S3 bucket's name.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:Put*" ], "Resource": [ "arn:aws:s3:::codepipeline-output-bucket/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::codepipeline-output-bucket" ] } ]}4.    Choose Review policy.5.    For Name, enter a name for the policy. For example: outputbucketdeployaccess.6.    Choose Create policy.Create an IAM policy for the role that grants the required KMS permissions1.    In the IAM console, choose Create policy.2.    Choose the JSON tab. Then, enter the following policy in the JSON editor:Note: Replace the ARN of the KMS key that you created. 
Replace codepipeline-us-east-1-0123456789 with the name of the artifact bucket in the development account.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "kms:DescribeKey", "kms:GenerateDataKey*", "kms:Encrypt", "kms:ReEncrypt*", "kms:Decrypt" ], "Resource": [ "arn:aws:kms:us-east-1:<dev-account-id>:key/<key id>" ] }, { "Effect": "Allow", "Action": [ "s3:Get*" ], "Resource": [ "arn:aws:s3:::codepipeline-us-east-1-0123456789/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::codepipeline-us-east-1-0123456789" ] } ]}3.    Choose Review policy.4.    For Name, enter a name for the policy. For example: devkmss3access.5.    Choose Create policy.Create a cross-account role that the development account can assume to deploy the artifacts1.    Open the IAM console in the production account.2.    In the navigation pane, choose Roles. Then, choose Create role.3.    Choose Another AWS account.4.    For Account ID, enter the development account's AWS account ID.5.    Choose Next: Permissions.6.    From the list of policies, select outputbucketdeployaccess and devkmss3access.7.    Choose Next: Tags.8.    (Optional) Add tags, and then choose Next: Review.9.    For Role name, enter prods3role.10.    Choose Create role.11.    From the list of roles, choose prods3role.12.    Choose the Trust relationship. Then, choose Edit Trust relationship.13.    In the Policy Document editor, enter the following policy:Important: Replace dev-account-id with your development account's AWS account ID. Replace AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy with the name of the service role for your pipeline.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::<dev-account-id>:role/service-role/AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy" ] }, "Action": "sts:AssumeRole", "Condition": {} } ]}14.    Choose Update Trust Policy.Update the bucket policy for the CodePipeline artifact bucket in the development account1.    Open the Amazon S3 console in the development account.2.    In the Bucket name list, choose the name of your artifact bucket in your development account (for this example, codepipeline-us-east-1-0123456789).3.    Choose Permissions. Then, choose Bucket Policy.4.    In the text editor, update your existing policy to include the following policy statements:Important: To align with proper JSON formatting, add a comma after the existing statements. Replace prod-account-id with your production account's AWS account ID. Replace codepipeline-us-east-1-0123456789 with your artifact bucket's name.{ "Sid": "", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<prod-account-id>:root" }, "Action": [ "s3:Get*", "s3:Put*" ], "Resource": "arn:aws:s3:::codepipeline-us-east-1-0123456789/*"},{ "Sid": "", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<prod-account-id>:root" }, "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::codepipeline-us-east-1-0123456789"}5.    Choose Save.Attach a policy to your CodePipeline service role in the development account that allows it to assume the cross-account role that you created1.    Open the IAM console in the development account.2.    In the navigation pane, choose Policies. Then, choose Create policy.3.    Choose the JSON tab. 
Then, enter the following policy in the JSON editor:Important: Replace prod-account-id with your production account's AWS account ID.{ "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": [ "arn:aws:iam::<prod-account-id>:role/prods3role" ] }}4.    Choose Review policy.5.    For Name, enter assumeprods3role.6.    Choose Create policy.7.    In the navigation pane, choose Roles. Then, choose the name of the service role for your pipeline (for this example, AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy).8.    Choose Attach Policies. Then, select assumeprods3role.9.    Choose Attach Policy.Update your pipeline to use the cross-account role in the development accountNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.1.    Retrieve the pipeline definition as a file named codepipeline.json by running the following AWS CLI command:Important: Replace crossaccountdeploy with your pipeline's name.aws codepipeline get-pipeline --name crossaccountdeploy > codepipeline.json2.    Add the cross-account IAM role ARN (roleArn) to the deploy action section of the codepipeline.json file. For more information, see the CodePipeline pipeline structure reference in the CodePipeline User Guide.Example cross-account IAM roleArn"roleArn": "arn:aws:iam::your-prod-account id:role/prods3role",Example deploy action that includes a cross-account IAM role ARNImportant: Replace the prod-account-id with your production account's AWS account ID.{ "name": "Deploy", "actions": [ { "name": "Deploy", "actionTypeId": { "category": "Deploy", "owner": "AWS", "provider": "S3", "version": "1" }, "runOrder": 1, "configuration": { "BucketName": "codepipeline-output-bucket", "Extract": "true" }, "outputArtifacts": [], "inputArtifacts": [ { "name": "SourceArtifact" } ], "roleArn": "arn:aws:iam::<prod-account-id>:role/prods3role", "region": "us-east-1", "namespace": "DeployVariables" } ]}3.    Remove the metadata section at the end of the codepipeline.json file.Important: Make sure that you also remove the comma that's before the metadata section.Example metadata section"metadata": { "pipelineArn": "arn:aws:codepipeline:us-east-1:<dev-account-id>:crossaccountdeploy", "created": 1587527378.629, "updated": 1587534327.983}4.    Update the pipeline by running the following command:aws codepipeline update-pipeline --cli-input-json file://codepipeline.jsonUpload the sample website to the input bucket1.    Open the Amazon S3 console in the development account.2.    In the Bucket name list, choose your development input S3 bucket. For example: codepipeline-input-bucket.3.    Choose Upload. Then, choose Add files.4.    Select the sample-website.zip file that you downloaded earlier.5.    Choose Upload to run the pipeline. When the pipeline runs, the following occurs:The source action selects the sample-website.zip from the development input S3 bucket (codepipeline-input-bucket). Then, the source action places the zip file as a source artifact inside the artifact bucket in the development account ( codepipeline-us-east-1-0123456789).In the deploy action, the CodePipeline service role (AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy) assumes the cross-account role (prods3role) in the production account.CodePipeline uses the cross account role (prods3role) to access the KMS key and artifact bucket in the development account. 
Then, CodePipeline deploys the extracted files to the production output S3 bucket (codepipeline-output-bucket) in the production account.Note: The production account is the owner of the extracted objects in the production output S3 bucket (codepipeline-output-bucket).Follow"
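After the pipeline runs, you can confirm from the production account that the extracted objects landed in the output bucket and are owned by the production account. This is a minimal sketch using the example bucket name from above.
# Run from the production account: list the deployed objects together with the owner's canonical ID.
aws s3api list-objects-v2 \
  --bucket codepipeline-output-bucket \
  --fetch-owner \
  --query 'Contents[].{Key:Key,OwnerId:Owner.ID}'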
https://repost.aws/knowledge-center/codepipeline-artifacts-s3
How do I resolve "KMSAccessDeniedException" errors from AWS Lambda?
My AWS Lambda function returned a "KMSAccessDeniedException" error.
"My AWS Lambda function returned a "KMSAccessDeniedException" error.Short descriptionUpdate the AWS Key Management Service (AWS KMS) permissions of your AWS Identity and Access Management (IAM) identity based on the error message.Important: If the AWS KMS key and IAM role belong to different AWS accounts, then both the IAM policy and AWS KMS key policy must be updated.For more information about AWS KMS keys and policy management, see AWS KMS keys.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.To resolve "KMS Exception: UnrecognizedClientExceptionKMS Message" errorsThe following error usually occurs when a Lambda function's execution role is deleted and then recreated using the same name, but with a different principal:Calling the invoke API action failed with this message: Lambda was unable to decrypt the environment variables because KMS access was denied. Please check the function's AWS KMS key settings. KMS Exception: UnrecognizedClientExceptionKMS Message: The security token included in the request is invalid.To resolve the error, you must reset the AWS KMS grant for the function's execution role by doing the following:Note: The IAM user that creates and updates the Lambda function must have permission to use the AWS KMS key.1.    Get the Amazon Resource Name (ARN) of the function's current execution role and AWS KMS key, by running the following AWS CLI command:Note: Replace yourFunctionName with your function's name.$ aws lambda get-function-configuration --function-name yourFunctionName2.    Reset the AWS KMS grant by doing one of the following:Update the function's execution role to a different, temporary value, by running the following update-function-configuration command:Important: Replace temporaryValue with the temporary execution role ARN.$ aws lambda update-function-configuration --function-name yourFunctionName --role temporaryValueThen, update the function's execution role back to the original execution role by running the following command:Important: Replace originalValue with the original execution role ARN.$ aws lambda update-function-configuration --function-name yourFunctionName --role originalValue-or-Update the function's AWS KMS key to a different, temporary value, by running the following update-function-configuration command:Important: Replace temporaryValue with a temporary AWS KMS key ARN. To use a default service key, set the kms-key-arn parameter to "".$ aws lambda update-function-configuration --function-name yourFunctionName --kms-key-arn temporaryValueThen, update the function's AWS KMS key back to the original AWS KMS key ARN by running the following command:Important: Replace originalValue with the original AWS KMS key ARN.$ aws lambda update-function-configuration --function-name yourFunctionName --kms-key-arn originalValueFor more information, see Key policies in AWS KMS.To resolve "KMS Exception: AccessDeniedException KMS Message" errorsThe following error indicates that your IAM identity doesn't have the permissions required to perform the kms:Decrypt API action:Lambda was unable to decrypt your environment variables because the KMS access was denied. Please check your KMS permissions. 
KMS Exception: AccessDeniedException KMS Message: The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access.To resolve the error, add the following policy statement to your IAM user or role:Important: Replace "your-KMS-key-arn" with your AWS KMS key ARN.{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "kms:Decrypt", "Resource": "your-KMS-key-arn" } ]}For instructions, see Adding permissions to a user (console) or Modifying a role permissions policy (console), based on your use case.To resolve "You are not authorized to perform" errorsThe following errors indicate that your IAM identity doesn't have one of the permissions required to access the AWS KMS key:You are not authorized to perform: kms:Encrypt.You are not authorized to perform: kms:CreateGrant.User: user-arn is not authorized to perform: kms:ListAliases on resource: * with an explicit deny.Note: AWS KMS permissions aren't required for your IAM identity or the function's execution role if you use the default key policy.To resolve these types of errors, verify that your IAM user or role has the permissions required to perform the following AWS KMS API actions:ListAliasesCreateGrantEncryptDecryptFor instructions, see Adding permissions to a user (console) or Modifying a role permissions policy (console), based on your use case.Example IAM policy statement that grants the permissions required to access a customer-managed AWS KMS keyImportant: The Resource value must be "*". The kms:ListAliases action doesn't support low-level permissions. Also, make sure that you replace "your-kms-key-arn" with your AWS KMS key ARN.{ "Version": "2012-10-17", "Statement": [ { "Sid": "statement1", "Effect": "Allow", "Action": [ "kms:Decrypt", "kms:Encrypt", "kms:CreateGrant" ], "Resource": "your-kms-key-arn" }, { "Sid": "statement2", "Effect": "Allow", "Action": "kms:ListAliases", "Resource": "*" } ]}To resolve "Access to KMS is not allowed" errorsThe following error indicates that an IAM entity doesn't have permissions to get AWS Secrets Manager secrets:Access to KMS is not allowed (Service: AWSSecretsManager; Status Code: 400; Error Code: AccessDeniedException; Request ID: 123a4bcd-56e7-89fg-hij0-1kl2m3456n78)Make sure that your IAM user or role has permissions required to make the following AWS KMS API actions:DecryptGenerateDataKeyFor more information, see How can I resolve issues accessing an encrypted AWS Secrets Manager secret?Related informationHow do I troubleshoot HTTP 502 and HTTP 500 status code (server-side) errors from AWS Lambda?How do I troubleshoot Lambda function failures?Follow"
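The following is a minimal sketch for attaching the kms:Decrypt policy statement shown above to the function's execution role and then re-testing the function. The role name, policy name, and file name are placeholders; kms-decrypt.json would contain the policy statement with your AWS KMS key ARN filled in.
# Attach the inline policy that allows kms:Decrypt on your AWS KMS key.
aws iam put-role-policy \
  --role-name my-lambda-execution-role \
  --policy-name AllowKmsDecryptForEnvVars \
  --policy-document file://kms-decrypt.json
# Confirm that the function can now decrypt its environment variables.
aws lambda invoke --function-name yourFunctionName /tmp/lambda-out.json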
https://repost.aws/knowledge-center/lambda-kmsaccessdeniedexception-errors
Why is my API Gateway proxy resource with a Lambda authorizer that has caching activated returning HTTP 403 "User is not authorized to access this resource" errors?
"My Amazon API Gateway proxy resource with an AWS Lambda authorizer that has caching activated returns the following HTTP 403 error message: "User is not authorized to access this resource". Why is this happening, and how do I resolve the error?"
"My Amazon API Gateway proxy resource with an AWS Lambda authorizer that has caching activated returns the following HTTP 403 error message: "User is not authorized to access this resource". Why is this happening, and how do I resolve the error?Short descriptionNote: API Gateway can return 403 User is not authorized to access this resource errors for a variety of reasons. This article addresses 403 errors related to API Gateway proxy resources with a Lambda authorizer that has caching activated only. For information on troubleshooting other types of 403 errors, see How do I troubleshoot HTTP 403 errors from API Gateway?A Lambda authorizer's output returns an AWS Identity and Access Management (IAM) policy to API Gateway. The IAM policy includes an explicit API Gateway API "Resource" element that's in the following format:"arn:aws:execute-api:<region>:<account>:<API_id>/<stage>/<http-method>/[<resource-path-name>/[<child-resources-path>]"When Authorization Caching is activated on a Lambda authorizer, the returned IAM policy is cached. The cached IAM policy is then applied to any additional API requests made within the cache's specified time-to-live (TTL) period.If the API has a proxy resource with a greedy path variable of {proxy+}, the first authorization succeeds. Any additional API requests made to a different path within the cache's TTL period fail and return the following error:"message": "User is not authorized to access this resource"The additional requests fail, because the paths don't match the explicit API Gateway API "Resource" element defined in the cached IAM policy.To resolve the issue, you can modify the Lambda authorizer function's code to return a wildcard (*/*) resource in the output instead. For more information, see Resources and conditions for Lambda actions.Note: To activate authorizer caching, your authorizer must return a policy that is applicable to all methods across an API Gateway. The Lambda authorizer function's code must return a wildcard (*/*) resource in the output to allow all resources. The cache policy expects the same resource path cached, unless you made the same request twice on the same resource-path.ResolutionNote: Modify the example Lambda authorizer function code snippets in this article to fit your use case.In the following example setups, the Lambda functions extract the API Gateway's id value from the method's Amazon Resource Name (ARN) ( "event.methodArn"). 
Then, the functions define a wildcard "Resource" variable by combining the method ARN's paths with the API's id value and a wildcard ( */*).Example token-based Lambda authorizer function code that returns a wildcard "Resource" variableexports.handler = function(event, context, callback) { var token = event.authorizationToken; var tmp = event.methodArn.split(':'); var apiGatewayArnTmp = tmp[5].split('/'); // Create wildcard resource var resource = tmp[0] + ":" + tmp[1] + ":" + tmp[2] + ":" + tmp[3] + ":" + tmp[4] + ":" + apiGatewayArnTmp[0] + '/*/*'; switch (token) { case 'allow': callback(null, generatePolicy('user', 'Allow', resource)); break; case 'deny': callback(null, generatePolicy('user', 'Deny', resource)); break; case 'unauthorized': callback("Unauthorized"); // Return a 401 Unauthorized response break; default: callback("Error: Invalid token"); // Return a 500 Invalid token response }};// Help function to generate an IAM policyvar generatePolicy = function(principalId, effect, resource) { var authResponse = {}; authResponse.principalId = principalId; if (effect && resource) { var policyDocument = {}; policyDocument.Version = '2012-10-17'; policyDocument.Statement = []; var statementOne = {}; statementOne.Action = 'execute-api:Invoke'; statementOne.Effect = effect; statementOne.Resource = resource; policyDocument.Statement[0] = statementOne; authResponse.policyDocument = policyDocument; } // Optional output with custom properties of the String, Number or Boolean type. authResponse.context = { "stringKey": "stringval", "numberKey": 123, "booleanKey": true }; return authResponse;}Example request parameter-based Lambda authorizer function code that returns a wildcard "Resource" variableexports.handler = function(event, context, callback) { // Retrieve request parameters from the Lambda function input: var headers = event.headers; var queryStringParameters = event.queryStringParameters; var pathParameters = event.pathParameters; var stageVariables = event.stageVariables; // Parse the input for the parameter values var tmp = event.methodArn.split(':'); var apiGatewayArnTmp = tmp[5].split('/'); // Create wildcard resource var resource = tmp[0] + ":" + tmp[1] + ":" + tmp[2] + ":" + tmp[3] + ":" + tmp[4] + ":" + apiGatewayArnTmp[0] + '/*/*'; console.log("resource: " + resource); // if (apiGatewayArnTmp[3]) { // resource += apiGatewayArnTmp[3]; // } // Perform authorization to return the Allow policy for correct parameters and // the 'Unauthorized' error, otherwise. 
var authResponse = {}; var condition = {}; condition.IpAddress = {}; if (headers.headerauth1 === "headerValue1" && queryStringParameters.QueryString1 === "queryValue1" && stageVariables.StageVar1 === "stageValue1") { callback(null, generateAllow('me', resource)); } else { callback("Unauthorized"); }} // Help function to generate an IAM policyvar generatePolicy = function(principalId, effect, resource) { // Required output: console.log("Resource in generatePolicy(): " + resource); var authResponse = {}; authResponse.principalId = principalId; if (effect && resource) { var policyDocument = {}; policyDocument.Version = '2012-10-17'; // default version policyDocument.Statement = []; var statementOne = {}; statementOne.Action = 'execute-api:Invoke'; // default action statementOne.Effect = effect; statementOne.Resource = resource; console.log("***Resource*** " + resource); policyDocument.Statement[0] = statementOne; console.log("***Generated Policy*** "); console.log(policyDocument); authResponse.policyDocument = policyDocument; } // Optional output with custom properties of the String, Number or Boolean type. authResponse.context = { "stringKey": "stringval", "numberKey": 123, "booleanKey": true }; return authResponse;} var generateAllow = function(principalId, resource) { return generatePolicy(principalId, 'Allow', resource);} var generateDeny = function(principalId, resource) { return generatePolicy(principalId, 'Deny', resource);}For more information on how to edit Lambda function code, see Deploying Lambda functions defined as .zip file archives.Related informationEdit code using the console editorFollow"
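After you deploy the updated authorizer code, any previously cached, path-specific policies can still cause 403 errors until the cache TTL expires. As a quick workaround during testing, you can flush the stage's authorizer cache from the AWS CLI; the API ID and stage name below are placeholders.
# Flush the cached authorizer policies for the stage so the updated wildcard policy takes effect.
aws apigateway flush-stage-authorizers-cache \
  --rest-api-id a1b2c3d4e5 \
  --stage-name prod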
https://repost.aws/knowledge-center/api-gateway-lambda-authorization-errors
How do I monitor the performance of my Amazon RDS for MySQL DB instance?
I want to monitor the performance of my Amazon Relational Database Service (Amazon RDS) for MySQL DB instance. What's the best way to do this?
"I want to monitor the performance of my Amazon Relational Database Service (Amazon RDS) for MySQL DB instance. What's the best way to do this?Short descriptionThere are several ways that you can monitor your Amazon RDS for MySQL DB instance:Amazon CloudWatchEnhanced MonitoringRDS Performance InsightsSlow query logsTo troubleshoot any issues or multi-point failures, it's a best practice to monitor your DB instance using a variety of these monitoring methods.ResolutionAmazon CloudWatchAmazon CloudWatch provides real-time metrics of your Amazon RDS for MySQL database instance. By default, Amazon RDS metrics are automatically sent to Amazon CloudWatch every 60 seconds. You can also create a usage alarm to watch a single Amazon RDS metric over a specific time period.To monitor Amazon RDS metrics with Amazon CloudWatch, perform the following steps:Note: Metrics are first grouped by the service namespace, and then by the various dimension combinations within each namespace.1.    Open the Amazon CloudWatch console.2.    (Optional) Update your AWS Region. From the navigation bar, choose the AWS Region where your AWS resources exist. For more information, see Regions and endpoints.3.    In the navigation pane, choose Metrics.4.    Choose the RDS metric namespace.5.    Select a metric dimension.6.    (Optional) Sort, filter, update the display of your metrics:To sort your metrics, use the column heading.To create graph view of your metric, select the check box next to the metric.To filter by resource, choose the resource ID, and then choose Add to search.To filter by metric, choose the metric name, and then choose Add to search.Enhanced Monitoring (within 1-5 seconds of granularity interval)When you use Enhanced Monitoring in Amazon RDS, you can view real-time metrics of an operating system that your DB instance runs on.Note: You must create an AWS Identity Access Management (IAM) role that allows Amazon RDS to communicate with Amazon CloudWatch Logs.To enable Enhanced Monitoring in Amazon RDS, perform the following steps:1.    Scroll to the Monitoring section.2.    Choose Enable enhanced monitoring for your DB instance or read replica.3.    For Monitoring Role, specify the IAM role that you created.4.    Choose Default to have Amazon RDS create the rds-monitoring-role role for you.5.    Set the Granularity property to the interval, in seconds, between points when metrics are collected for your DB instance or read replica. The Granularity property can be set to one of the following values: 1, 5, 10, 15, 30, or 60.RDS Performance InsightsNote: If Performance Insights is manually enabled after creating the DB instance, a reboot instance is required to enable Performance Schema. Performance Schema is disabled when the parameter is set to "0" or "1" or the Source column for the parameter is set to "user". When the performance_schema parameter is disabled, Performance Insights displays a DB load that is categorized by the list state of the Amazon RDS MySQL process. To enable the performance_schema parameter, use reset performance_schema parameter.When you use RDS Performance Insights, you can visualize the database load and filter the load by waits, SQL statements, hosts, or users. This way, you can identify which queries are causing issues and view the wait type and wait events associated to that query.You can enable Performance Insights for Amazon RDS for MySQL in the Amazon RDS console.Slow query loggingYou can enable your slow query log by setting the slow_query_log value to "1". 
(The default value is "0", which means that your slow query log is disabled.) A slow query log records any queries that run longer than the number of seconds specified for the long_query_time metric. (The default value for the long_query_time metric is "10".) For example, to log queries that run longer than two seconds, you can update the number of seconds for the long_query_time metric to a value such as "2".To enable slow query logs for Amazon RDS for MySQL using a custom parameter group, perform the following:1.    Open the Amazon RDS console.2.    From the navigation pane, choose Parameter groups.3.    Choose the parameter group that you want to modify.4.    Choose Parameter group actions.5.    Choose Edit.6.    Choose Edit parameters.7.    Update the following parameters:log_output: If the general log or slow query log is enabled, update the value to "file" for write logs to the file system. Log files are rotated hourly.long_query_time: Update the value to "2" or greater, to log queries that run longer than two seconds or more.slow_query_log: Update the value to "1" to enable logging. (The default value is "0", which means that logging is disabled.)8.    Choose Save Changes.Note: You can modify the parameter values in a custom DB group, but not a default DB parameter group. If you're unable to modify the parameter in a custom DB parameter group, check whether the Value type is set to "Modifiable". For information on how to publish MySQL logs to an Amazon CloudWatch log group, see Publishing MySQL logs to Amazon CloudWatch Logs.Related informationAccessing MySQL database log filesBest practices for configuring parameters for Amazon RDS for MySQL, part 1: Parameters related to performanceHow can I troubleshoot and resolve high CPU utilization on my Amazon RDS for MySQL, MariaDB, or Aurora for MySQL instances?How do I enable and monitor logs for an Amazon RDS MySQL DB instance?Follow"
https://repost.aws/knowledge-center/rds-mysql-db-performance
How can I encrypt a specific folder in my Amazon S3 bucket using AWS KMS?
I want to encrypt a specific folder in my Amazon Simple Storage Service (Amazon S3) bucket with an AWS Key Management Service (AWS KMS) key. How can I do that?
"I want to encrypt a specific folder in my Amazon Simple Storage Service (Amazon S3) bucket with an AWS Key Management Service (AWS KMS) key. How can I do that?ResolutionEncrypting a folder using the Amazon S3 console1.    Open the Amazon S3 console.2.    Navigate to the folder that you want to encrypt.Warning: If your folder contains a large number of objects, you might experience a throttling error. To avoid throttling errors, consider increasing your Amazon S3 request limits on your Amazon S3 bucket. For more troubleshooting tips on throttling errors, see Why am I receiving a ThrottlingExceptions error when making requests to AWS KMS?3.    Select the folder, and then choose Actions.4.    Choose Edit server-side encryption.5.    Select Enable for Enabling Server-side encryption.6.    Choose Encryption key type for your AWS Key Management Service key (SSE-KMS).7.    Select the AWS KMS key that you want to use for folder encryption.Note: The key named aws/s3 is a default key managed by AWS KMS. You can encrypt the folder with either the default key or a custom key.8.    Choose Save changes.Encrypting a folder using the AWS CLINote: You can't change the encryption of an existing folder using an AWS Command Line Interface (AWS CLI) command. Instead, you can run a command that copies the folder over itself with AWS KMS encryption enabled.To encrypt the files using the default AWS KMS key (aws/s3), run the following command:aws s3 cp s3://awsexamplebucket/abc s3://awsexamplebucket/abc --recursive --sse aws:kmsThis command syntax copies the folder over itself with AWS KMS encryption.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.To encrypt the files using a custom AWS KMS key, run the following command:aws s3 cp s3://awsexamplebucket/abc s3://awsexamplebucket/abc --recursive --sse aws:kms --sse-kms-key-id a1b2c3d4-e5f6-7890-g1h2-123456789abcMake sure to specify your own key ID for --sse-kms-key-id.Requiring that future uploads encrypt objects with AWS KMSAfter you change encryption, only the objects that are already in the folder are encrypted. Objects added to the folder after you change encryption can be uploaded without encryption. You can use a bucket policy to require that future uploads encrypt objects with AWS KMS.For example:{ "Version": "2012-10-17", "Id": "PutObjPolicy", "Statement": [ { "Sid": "DenyIncorrectEncryptionHeader", "Effect": "Deny", "Principal": "*", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::awsexamplebucket/awsexamplefolder/*", "Condition": { "StringNotEquals": { "s3:x-amz-server-side-encryption": "aws:kms" } } }, { "Sid": "DenyUnEncryptedObjectUploads", "Effect": "Deny", "Principal": "*", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::awsexamplebucket/awsexamplefolder/*", "Condition": { "Null": { "s3:x-amz-server-side-encryption": true } } } ]}This bucket policy denies access to s3:PutObject on docexamplebucket/docexamplefolder/* unless the request includes server-side encryption with AWS KMS.Related informationProtecting data using server-side encryption with AWS KMS CMKS (SSE-KMS)Follow"
https://repost.aws/knowledge-center/s3-encrypt-specific-folder
How do I verify the authenticity of Amazon SNS messages that are sent to HTTP and HTTPS endpoints?
"I'm sending notifications to an HTTPS—or HTTP—endpoint using Amazon Simple Notification Service (Amazon SNS). I want to prevent spoofing attacks, so how do I verify the authenticity of the Amazon SNS messages that my endpoint receives?"
"I'm sending notifications to an HTTPS—or HTTP—endpoint using Amazon Simple Notification Service (Amazon SNS). I want to prevent spoofing attacks, so how do I verify the authenticity of the Amazon SNS messages that my endpoint receives?ResolutionIt's a best practice to use certificate-based signature validation when verifying the authenticity of an Amazon SNS notification. For instructions, see Verifying the signatures of Amazon SNS messages in the Amazon SNS Developer Guide.To help prevent spoofing attacks, make sure that you do the following when verifying Amazon SNS message signatures:Always use HTTPS to get the certificate from Amazon SNS.Validate the authenticity of the certificate.Verify that the certificate was sent from Amazon SNS.(When possible) Use one of the supported AWS SDKs for Amazon SNS to validate and verify messages.Example message bodyThe following is an example message payload string sent from Amazon SNS:{"Type" : "Notification","MessageId" : "e1f2a232-e8ce-5f0a-b5d3-fbebXXXXXXXX","TopicArn" : "arn:aws:sns:us-east-1:XXXXXXXX:SNSHTTPSTEST","Subject" : "Test","Message" : "TestHTTPS","Timestamp" : "2021-10-07T18:55:19.793Z","SignatureVersion" : "1","Signature" : "VetoDxbYMh0Ii/87swLEGZt6FB0ZzGRjlW5BiVmKK1OLiV8B8NaVlADa6ThbWd1s89A4WX1WQwJMayucR8oYzEcWEH6//VxXCMQxWD80rG/NrxLeoyas4IHXhneiqBglLXh/R9nDZcMAmjPETOW61N8AnLh7nQ27O8Z+HCwY1wjxiShwElH5/+2cZvwCoD+oka3Gweu2tQyZAA9ergdJmXA9ukVnfieEEinhb8wuaemihvKLwGOTVoW/9IRMnixrDsOYOzFt+PXYuKQ6KGXpzV8U/fuJDsWiFa/lPHWw9pqfeA8lqUJwrgdbBS9vjOJIL+u2c49kzlei8zCelK3n7w==","SigningCertURL" : "https://sns.us-east-1.amazonaws.com/SimpleNotificationService-7ff5318490ec183fbaddaa2aXXXXXXXX.pem","UnsubscribeURL" : "https://sns.us-east-1.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-east-1:XXXXXXXX:SNSHTTPSTEST:b5ab2db8-7775-4852-bd1a-2520XXXXXXXX","MessageAttributes" : {"surname" : {"Type":"String","Value":"SNSHTTPSTest"}}}For more information on message formats that Amazon SNS uses, refer to Parsing message formats.Related informationFanout to HTTP/S endpointsUsing AWS Lamba with Amazon SNSWhat's the Amazon SNS IP address range?Follow"
https://repost.aws/knowledge-center/sns-verify-message-authenticity
How do I migrate from a NAT instance to a NAT gateway?
"I need to migrate from a NAT instance to a NAT gateway, and I want the migration done with minimal downtime."
"I need to migrate from a NAT instance to a NAT gateway, and I want the migration done with minimal downtime.Short descriptionWhen creating a migration plan, consider the following:Do you plan to use the same Elastic IP address for the NAT gateway as currently used by the NAT instance? A new Elastic IP address might not be recognized by external clients.Is your NAT instance performing other functions, such as port forwarding, custom scripts, providing VPN services, or acting as bastion host? A NAT gateway allows instances in a private subnet to connect to the internet or other AWS services. Internet connections towards the NAT gateway are not allowed. It can’t be used for any other functions.Have you configured your NAT instance security groups and your NAT gateway network access control lists (network ACLs) appropriately? You can use security groups on the NAT instance and network ACLs on the NAT instance subnet to control traffic to and from the NAT subnet. You can only use a network ACL to control the traffic to and from the subnet in which the NAT gateway is located.Do your current NAT instances provide high availability across Availability Zones? If so, you might want to create a Multi-AZ architecture. You can do this by creating a NAT gateway in each Availability Zone. Next, configure your private subnet route-tables in a specific Availability Zone to use the NAT gateway from the same Availability Zone. Multi-AZ is useful if you want to avoid charges for inter-AZ traffic.Do you have tasks running through the NAT instance? When the routing is changed from the NAT instance, existing connections are dropped, and the connections must be reestablished.Does your architecture support testing the instance migrations individually? If so, migrate one NAT instance to a NAT gateway and check the connectivity before migrating other instances.Do you allow incoming traffic from ports 1024 - 65535 on the NAT instance's network ACL? You must allow traffic from ports 1024 - 65535 because the NAT gateway uses these as source ports. To learn more, see VPC with public and private subnets (NAT).ResolutionDisassociate the Elastic IP address from the existing NAT instance.Create a NAT gateway in the public subnet for the NAT instance that you want to replace. You can do this with the disassociated Elastic IP address, or with a new Elastic IP address.Review the route tables that refer to the NAT instance or the elastic network interface of the NAT instance. Then edit the route to point to the newly created NAT gateway instead.Note: Repeat this process for every NAT instance and subnet that you want to migrate.Access one of the Amazon Elastic Compute Cloud (Amazon EC2) instances in the private subnet and verify connectivity to the internet.After you have successfully migrated to the NAT gateway and have verified connectivity, you can terminate the NAT instances.Related informationCompare NAT gateways and NAT instancesMigrate from a NAT instance to a NAT gatewayNAT gatewaysHow do I set up a NAT gateway for a private subnet in Amazon VPC?Troubleshoot NAT gatewaysFollow"
https://repost.aws/knowledge-center/migrate-nat-instance-gateway
Why am I experiencing intermittent connectivity issues with my Amazon Redshift cluster?
I'm experiencing intermittent connectivity issues when I try to connect to my Amazon Redshift cluster. Why is this happening and how do I troubleshoot this?
"I'm experiencing intermittent connectivity issues when I try to connect to my Amazon Redshift cluster. Why is this happening and how do I troubleshoot this?Short descriptionIntermittent connectivity issues in your Amazon Redshift cluster are caused by the following:Restricted access for a particular IP address or CIDR blockMaintenance window updatesNode failures or scheduled administration tasksEncryption key rotationsToo many active network connectionsHigh CPU utilization of the leader nodeClient-side connection issuesResolutionRestricted access for a particular IP address or CIDR blockCheck to see if there is restricted access for a particular IP address or CIDR block in your security group. Because of DHCP configuration, your client IP address can change, which can cause connectivity issues. Additionally, if you aren't using elastic IP addresses for your Amazon Redshift cluster, the AWS managed IP address of your cluster nodes might change. For example, your IP address can change when you delete your cluster and then recreate it from a snapshot, or when you resume a paused cluster.Note: Public IP addresses are rotated when the Amazon Redshift cluster is deleted and recreated. Private IP addresses change whenever nodes are replaced.To resolve any network restrictions, consider the following approaches:If your application is caching the public IP address behind a cluster endpoint, be sure to use this endpoint for your Amazon Redshift connection. To be sure of the stability and security in your network connection, avoid using a DNS cache for your connection.It's a best practice to use an elastic IP address for your Amazon Redshift cluster. An elastic IP address allows you to change your underlying configuration without affecting the IP address that clients use to connect to your cluster. This approach is helpful if you are recovering a cluster after a failure. For more information, see Managing clusters in a VPC.If you're using a private IP address to connect to a leader node or compute node, be sure to use the new IP address. For example, if you performed SSH ingestion or have an Amazon EMR configuration that uses the compute node, update your settings with the new IP address. A new private IP address is granted to new nodes after a node replacement.Maintenance window updatesCheck the maintenance window for your Amazon Redshift cluster. During a maintenance window, your Amazon Redshift cluster is unable to process read or write operations. If a maintenance event is scheduled for a given week, it starts during the assigned 30-minute maintenance window. While Amazon Redshift is performing maintenance, any queries or other operations that are in progress are shut down. You can change the scheduled maintenance window from the Amazon Redshift console.Node failures or scheduled administration tasksFrom the Amazon Redshift console, check the Events tab for any node failures or scheduled administration tasks (such as a cluster resize or reboot).If there is a hardware failure, Amazon Redshift might be unavailable for a short period, which can result in failed queries. When a query fails, you see an Events description such as the following:"A hardware issue was detected on Amazon Redshift cluster [cluster name]. A replacement request was initiated at [time]."Or, if an account administrator scheduled a restart or resize operation on your Amazon Redshift cluster, intermittent connectivity issues can occur. 
Your Events description then indicates the following:"Cluster [cluster name] began restart at [time].""Cluster [cluster name] completed restart at [time]."For more information, seeAmazon Redshift event categories and event messages.Encryption key rotationsCheck your key management settings for your Amazon Redshift cluster. Verify whether you are using AWS Key Management Service (AWS KMS) key encryption and key encryption rotation.If your encryption key is enabled and the encryption key is being rotated, then your Amazon Redshift cluster is unavailable during this time. As a result, you receive the following error message:"pg_query(): Query failed: SSL SYSCALL error: EOF detected"The frequency of your key rotation depends on your environment's policies for data security and standards. Rotate the keys as often as needed or whenever the encrypted key might be compromised. Also, be sure to have a key management plan that supports both your security and cluster availability needs.Too many active connectionsIn Amazon Redshift, all connections to your cluster are sent to the leader node, and there is a maximum limit for active connections. The maximum limit that your Amazon Redshift cluster can support is determined by node type (instead of node count).When there are too many active connections in your Amazon Redshift cluster, you receive the following error:"[Amazon](500310) Invalid operation: connection limit "500" exceeded for non-bootstrap users"If you receive anInvalid operation error while connecting to your Amazon Redshift cluster, it indicates that you have reached the connection limit. You can check the number of active connections for your cluster by looking at theDatabaseConnections metric in Amazon CloudWatch.If you notice a spike in your database connections, there might be a number of idle connections in your Amazon Redshift cluster. To check the number of idle connections, run the following SQL query:select trim(a.user_name) as user_name, a.usesysid, a.starttime, datediff(s,a.starttime,sysdate) as session_dur, b.last_end, datediff(s,case when b.last_end is not null then b.last_end else a.starttime end,sysdate) idle_dur FROM(select starttime,process,u.usesysid,user_name from stv_sessions s, pg_user u where s.user_name = u.usename and u.usesysid>1and process NOT IN (select pid from stv_inflight where userid>1 union select pid from stv_recents where status != 'Done' and userid>1)) a LEFT OUTER JOIN (select userid,pid,max(endtime) as last_end from svl_statementtext where userid>1 and sequence=0 group by 1,2) b ON a.usesysid = b.userid AND a.process = b.pidWHERE (b.last_end > a.starttime OR b.last_end is null)ORDER BY idle_dur;The output looks like this example:process | user_name | usesysid | starttime | session_dur | last_end | idle_dur---------+------------+----------+---------------------+-------------+----------+---------- 14684 | myuser | 100 | 2020-06-04 07:02:36 | 6 | | 6(1 row)When the idle connections are identified, the connection can be shut down using the following command syntax:select pg_terminate_backend(process);The output looks like this example:pg_terminate_backend ---------------------- 1(1 row)High CPU utilization of the leader nodeAll clients connect to an Amazon Redshift cluster using a leader node. 
High CPU utilization of the leader node can result in intermittent connection issues.If you try to connect to your Amazon Redshift cluster and the leader node is consuming high CPU, you receive the following error message:"Error setting/closing connection"To confirm whether your leader node has reached high CPU utilization, check the CPUUtilization metric in Amazon CloudWatch. For more information, see Amazon Redshift metrics.Client-side connection issuesCheck for a connection issue between the client (such as Workbench/J or PostgreSQL) and server (your Amazon Redshift cluster). A client-side connection reset might occur if your client is trying to send a request from a port that has been released. As a result, the connection reset can cause intermittent connection issues.To prevent these client-side connection issues, consider the following approaches:Use the keepalive feature in Amazon Redshift to check that the connection between the client and server is operating correctly. The keepalive feature also helps to prevent any connection links from being broken. To check or configure the values for keepalive, see Change TCP/IP timeout settings and Change DSN timeout settings.Check the maximum transmission unit (MTU) if your queries appear to be running but hang in the SQL client tool. Sometimes, the queries fail to appear in Amazon Redshift because of a packet drop. A packet drop occurs when there are different MTU sizes in the network paths between two IP hosts. For more information about how to manage packet drop issues, see Queries appear to hang and sometimes fail to reach the cluster.Follow"
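For example, the DatabaseConnections and CPUUtilization metrics mentioned above can be pulled from the AWS CLI instead of the console. The cluster identifier and time range below are placeholders.
# Check how close the cluster is to its connection limit over a time window
aws cloudwatch get-metric-statistics \
  --namespace AWS/Redshift \
  --metric-name DatabaseConnections \
  --dimensions Name=ClusterIdentifier,Value=my-redshift-cluster \
  --start-time 2023-01-01T00:00:00Z --end-time 2023-01-01T06:00:00Z \
  --period 300 --statistics Maximum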
https://repost.aws/knowledge-center/redshift-intermittent-connectivity
"Why am I getting the error “The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access” when I download or copy an object from my Amazon S3 bucket?"
"I'm getting the following error when I try to download or copy an object from my Amazon Simple Storage Service (Amazon S3) bucket: The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access."
"I'm getting the following error when I try to download or copy an object from my Amazon Simple Storage Service (Amazon S3) bucket: The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access.ResolutionYou get this error when both the following conditions are true:The object that's stored in the bucket where you are making requests to is encrypted with an AWS Key Management Service (AWS KMS) key.The AWS Identity and Access Management (IAM) role or user that's making the requests doesn't have sufficient permissions to access the AWS KMS key that's used to encrypt the objects.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent version of the AWS CLI.You can check the encryption on an object using the AWS CLI command head-object:aws s3api head-object --bucket my-bucket --key my-objectBe sure to do the following in the preceding command:Replace my-bucket with the name of your bucket.Replace my-object with the name of your object.The output for this command looks like the following:{ "AcceptRanges": "bytes", "ContentType": "text/html", "LastModified": "Thu, 16 Apr 2015 18:19:14 GMT", "ContentLength": 77, "VersionId": "null", "ETag": "\"30a6ec7e1a9ad79c203d05a589c8b400\"", "ServerSideEncryption": "aws:kms", "Metadata": {}, "SSEKMSKeyId": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab", "BucketKeyEnabled": true}The SSEKMSKeyId field in the output specifies the AWS KMS key that was used to encrypt the object.To resolve this error, do either of the following:Be sure that the policy that's attached to the IAM user or role has the required permissions. Example:{ "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": [ "kms:DescribeKey", "kms:GenerateDataKey", "kms:Decrypt" ], "Resource": [ "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab" ] }}Be sure that the AWS KMS policy has the required permissions. Example:{ "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::AWS-account-ID:user/user-name-1" }, "Action": [ "kms:DescribeKey", "kms:GenerateDataKey", "kms:Decrypt" ], "Resource": "*" }}If the IAM user or role and AWS KMS key are from different AWS accounts, then be sure of the following:The policy that's attached to the IAM entity has the required AWS KMS permissions.The AWS KMS key policy grants the required permissions to the IAM entity.Important: You can't use the AWS managed keys in cross-account use cases because the AWS managed key policies can't be modified.To get detailed information about an AWS KMS key, run the describe-key command:aws kms describe-key --key-id arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890abYou can also use the AWS KMS console to view details about an AWS KMS key.Note: Be sure that the AWS KMS key that's used to encrypt the object is enabled.Related informationMy Amazon S3 bucket has default encryption using a custom AWS KMS key. How can I allow users to download from and upload to the bucket?Do I need to specify the AWS KMS key when I download a KMS-encrypted object from Amazon S3?Follow"
https://repost.aws/knowledge-center/s3-download-object-ciphertext-error
How do I configure GoSH on an Amazon RDS instance that is running MySQL?
I have an Amazon Relational Database Service (Amazon RDS) instance that is running MySQL. I want to turn on and configure Global Status History (GoSH) on my RDS DB instance. How can I do this?
"I have an Amazon Relational Database Service (Amazon RDS) instance that is running MySQL. I want to turn on and configure Global Status History (GoSH) on my RDS DB instance. How can I do this?Short descriptionYou can use GoSH to maintain the history of different status variables in Amazon RDS for MySQL. First, you must turn on an event scheduler before you can use GoSH. Then, you can modify GoSH to run at specific intervals and to rotate tables regularly. By default, the GoSH information is collected every five minutes, stored in the mysql.rds_global_status_history table, and the table is rotated every seven days.Resolution1.    Modify the custom DB parameter group attached to the instance so that event_scheduler is set to ON.2.    Log in to your DB instance, and then run this command:SHOW PROCESSLIST;SHOW GLOBAL VARIABLES LIKE 'event_scheduler';3.    Turn on GoSH by running this command:CALL mysql.rds_enable_gsh_collector;4.    To modify the monitoring interval to one minute, run this command:CALL mysql.rds_set_gsh_collector(1);5.    Turn on rotation for the GoSH tables by running this command:CALL mysql.rds_enable_gsh_rotation;6.    Modify the rotation by running this command:CALL mysql.rds_set_gsh_rotation(5);Query the GoSH tables to fetch information about specific operations. For example, the following query provides details about the number of Data Manipulation Language (DML) operations performed on the instance every minute.SELECT collection_start, collection_end, sum(value) AS 'DML Queries Count' from (select collection_start, collection_end, "INSERTS" as "Operation", variable_Delta as "value" from mysql.rds_global_status_history where variable_name = 'com_insert' union select collection_start, collection_end, "UPDATES" as "Operation", variable_Delta as "value" from mysql.rds_global_status_history where variable_name = 'com_update' union select collection_start, collection_end, "DELETES" as "Operation", variable_Delta as "value" from mysql.rds_global_status_history where variable_name = 'com_delete') a group by 1,2;Note: This query is not applicable for MySQL 8.0.Related informationCommon DBA tasks for MySQL DB instancesManaging the global status historyFollow"
https://repost.aws/knowledge-center/enable-gosh-rds-mysql
How do I allow access to my Amazon S3 buckets to customers who do not use TLS 1.2 or higher?
"My customers don't use TLS versions 1.2 or higher, so they can't access content that's stored in my Amazon Simple Storage Service (Amazon S3) buckets. I want to allow these customers to access content in my Amazon S3 buckets using TLS 1.0 or 1.1."
"My customers don't use TLS versions 1.2 or higher, so they can't access content that's stored in my Amazon Simple Storage Service (Amazon S3) buckets. I want to allow these customers to access content in my Amazon S3 buckets using TLS 1.0 or 1.1.Short descriptionAWS enforces the use of TLS 1.2 or higher on all AWS API endpoints. To continue to connect to AWS services, update all software that uses TLS 1.0 or 1.1.ResolutionAmazon CloudFront allows the use of older TLS versions by abstracting customers from the TLS protocol that's used between your CloudFront distribution and Amazon S3.Create a CloudFront distribution with OACWith CloudFront, you can support anonymous and public requests to your S3 buckets. Or, you can make your S3 buckets private and accessible through CloudFront only by requiring signed requests to access your S3 buckets.Support anonymous and public requests to your S3 bucketsNote: The following example assumes that you already have an S3 bucket in use. If you don't have an S3 bucket, then create one.To create the CloudFront distribution, follow these steps:Open the CloudFront console.Choose Create Distribution.Under Origin, for Origin domain, choose your S3 bucket's REST API endpoint from the dropdown list.For Viewer protocol policy, select Redirect HTTP to HTTPS.For Allowed HTTP endpoints, select GET, HEAD, OPTIONS to support read requests.In the Origin access section, select Origin access control settings (recommended).Select Create control setting, and use the default name. For the signing behavior, select Sign requests (recommended), and select Create. The OAC recommended settings automatically authenticates the viewer's request.Select the identity in the dropdown list. After the distribution is created, update the bucket policy to restrict access to OAC.Under Default cache behavior, Viewer, select Redirect HTTP to HTTPS for Viewer Protocol Policy, and leave the other settings as default.Under Cache key and origin requests, select Cache policy and origin request policy (recommended). Then, use CachingOptimized for the Cache policy and CORS-S3Origin for the Origin request policy.Select Create distribution, and then wait for its status to update to Enabled.Require signed requests to access your S3 bucketsAdd security to your S3 buckets by supporting signed requests only. 
With signed requests, OAC follows your authentication parameters and forwards them to the S3 origin, which then denies anonymous requests.To create a CloudFront distribution that requires signed requests to access your S3 buckets, follow these steps:Open the CloudFront console.Choose Create Distribution.Under Origin, for Origin domain, choose your S3 bucket's REST API endpoint from the dropdown list.For Viewer protocol policy, select Redirect HTTP to HTTPS.For Allowed HTTP endpoints, select GET, HEAD, OPTIONS to support read requests.In the Origin access section, select Origin access control settings (recommended).Block all unsigned requests by checking the Do not sign requests option.Note: Blocking unsigned requests requires every customer to sign their requests so that the S3 origin can evaluate the permissions.Create a custom cache policy to forward the customer's Authorization header to the origin.Under Cache key and origin requests, select Cache policy and origin request policy (recommended).Select Create Policy.Enter a name for the cache policy in the Name section.Under Cache key settings, go to Headers, and select Include the following headers.Under Add Header, select Authorization.Select Create.Control the customer's security policyTo control a security policy in CloudFront, you must have a custom domain. It's a best practice to specify an alternate domain name for your distribution. It's also a best practice to use a custom SSL certificate that's configured in AWS Certificate Manager (ACM). Doing so gives you more control over the security policy, and allows customers to continue to use TLS 1.0. For more information, see Supported protocols and ciphers between viewers and CloudFront.If you use the default *.cloudfront.net domain name, then CloudFront automatically provisions a certificate and sets the security policy to allow TLS 1.0 and 1.1. For more information, see Distribution settings.To configure an alternate domain name for your CloudFront distribution, follow these steps:Sign in to the AWS Management Console, and then open the CloudFront console.Choose the ID for the distribution that you want to update.On the General tab, choose Edit.For Alternate Domain Names (CNAMEs), choose Add item, and enter your domain name.Note: It's a best practice to use a custom canonical name record (CNAME) to access your resources. Using a CNAME gives you greater control over routing, and allows a better transition for your customers.For Custom SSL Certificate, choose the custom SSL certificate from the dropdown list that covers your CNAME to assign it to the distribution.Note: For more information on installing a certificate, see How do I configure my CloudFront distribution to use an SSL/TLS certificate?Choose Create distribution, and wait for its status to update to Enabled.After you create the distribution, you must allow OAC to access your bucket. Complete the following steps:Navigate to the CloudFront console page, and open your CloudFront distribution.Select the Origins tab, select your origin, and then click Edit.Choose Copy policy, open the bucket permission, and update your bucket policy.Open the Go to S3 bucket permissions page.Under Bucket policy, choose Edit. Paste the policy that you copied earlier, and then choose Save. If your bucket policy requires more than reading from S3, then you can add the required APIs.If you use a custom domain name, then change your DNS entries to use the new CloudFront distribution URL. 
If you don't use a custom domain name, then you must provide the new CloudFront distribution URL to your users. Also, you must update any client or device software that uses the old URL.If you're using an AWS SDK to access Amazon S3 objects, then you must change your code to use regular HTTPS endpoints. Also, make sure that you use the new CloudFront URL. If the objects aren't public and require better control, then you can serve private content with signed URLs and signed cookies.Use S3 presigned URLs to access objectsIf your workflow relies on S3 presigned URLs, then use a CloudFront distribution to relay your query to the S3 origin. First, generate a presigned URL for the object you want. Then, replace the host in the URL with the CloudFront endpoint to deliver the call through CloudFront and automatically upgrade the encryption protocol. To test and generate a presigned URL, run the following CLI command:aws s3 presign s3://BUCKET_NAME/test.jpgExample output:https://BUCKET_NAME.s3.us-east-1.amazonaws.com/test.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=[...]%2F20220901%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=[...]&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Signature=[...]Now change the S3 URL to the new CloudFront endpoint. For example, replace this S3 URL:BUCKET_NAME.s3.us-east-1.amazonaws.com with this endpoint:https://DISTRIBUTION_ID.cloudfront.net.Example output:https://DISTRIBUTION_ID.cloudfront.net/test.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=[...]%2F20220901%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=[...]&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Signature=[...]To use presigned URLs, apply the following CloudFront settings:Set the OAC signing behavior to Do not sign requests.Set the CloudFront distribution origin request policy to Origin request settings: Headers – None; Cookies – None; Query strings – All.Set the cache policy to Headers – None; Cookies – None; Query strings – None.In AWS CloudTrail, the GET request to download from an S3 presigned URL shows as the identity that generated the presigned URL.If you're using an AWS SDK to access S3 objects, then you must change your code to use the presigned URL. Use a regular HTTPS request instead, and use the new CloudFront URL.Confirm that you're using modern encryption protocols for Amazon S3To test your new policy, use the following example curl command to make HTTPS requests using a specific legacy protocol:curl https://${CloudFront_Domain}/image.png -v --tlsv1.0 --tls-max 1.0The example curl command makes a request to CloudFront using TLS 1.0. This connects to the S3 origin using TLS 1.2 and successfully downloads the file.It's a best practice to use AWS CloudTrail Lake to identify older TLS connections to AWS service endpoints. You can configure the CloudTrail Lake event data store to capture management events or data events. The corresponding CloudTrail event in CloudTrail Lake shows TLS version 1.2, confirming that your customers use modern security policy to connect to Amazon S3.Follow"
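The host swap for a presigned URL can be scripted, for example with bash parameter substitution. The bucket name, Region, and distribution domain below are placeholders, and the OAC must be set to Do not sign requests as described above.
# Generate a presigned URL, then rewrite the host so the request goes through CloudFront
PRESIGNED_URL=$(aws s3 presign s3://BUCKET_NAME/test.jpg --expires-in 3600)
CF_URL=${PRESIGNED_URL/BUCKET_NAME.s3.us-east-1.amazonaws.com/DISTRIBUTION_ID.cloudfront.net}
# Older clients can now fetch the object even when they are limited to TLS 1.0
curl -v --tlsv1.0 --tls-max 1.0 "$CF_URL" -o test.jpg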
https://repost.aws/knowledge-center/s3-access-old-tls
"After I use AWS Organizations to create a member account, how do I access that account?"
"I used AWS Organizations to create a member account in my Organization, and I want to access that account."
"I used AWS Organizations to create a member account in my Organization, and I want to access that account.Short descriptionWhen you create a member account with Organizations, you must specify an email address, an AWS Identity and Access Management (IAM) role, and an account name. If a role name isn't specified, then a default name is assigned: OrganizationAccountAccessRole. To switch to the IAM role and access the member account, use the Organizations console.ResolutionIn the Organizations console, member accounts appear under the Accounts tab. Note the account number, email address, and IAM role name of the member account that you want to access. You can access the member account using either the IAM role or the AWS account root user credentials.Option one: Use the IAM Role1.    Open the AWS Management Console using IAM user credentials.2.    Choose your account name at the top of the page, and then select Switch role.Important: If you signed in with root user credentials, then you can't switch roles. You must sign in as an IAM user or role. For more information, see Switching to a role (console).3.    Enter the account number and role name for the member account.4.    (Optional) You can also enter a custom display name (maximum 64 characters) and a display color for the member account.5.    Choose Switch role.Option two: Use the root user credentialsWhen you create a new member account, Organizations sets an initial password for that account that can't be retrieved. To access the account as the root user for the first time, follow these instructions to reset the initial password:1.    Follow the instructions for Accessing a member account as the root user.2.    After you receive the reset password email, choose the Reset password link.3.    Open the AWS Management Console using the root user name and the new password.For more information, see How do I recover a lost or forgotten AWS password?Note: It's a best practice to use the root user only to create IAM users, groups, and roles. It's also a best practice to use multi-factor authentication for your root user.Related informationAccessing and administering the member accounts in your organizationRemoving a member account from your organizationI can't assume a roleFollow"
https://repost.aws/knowledge-center/organizations-member-account-access
Why am I unable to change the maintenance track for my Amazon Redshift provisioned cluster?
I'm unable to change the maintenance track for my Amazon Redshift provisioned cluster.
"I'm unable to change the maintenance track for my Amazon Redshift provisioned cluster.Short descriptionAmazon Redshift periodically performs maintenance to apply upgrades to your cluster hardware or to perform software patches. During these updates, your Amazon Redshift cluster isn't available for normal operations. If a scheduled maintenance occurs while a query is running, then the query is terminated and rolled back.Note: The following is applicable to only provisioned Amazon Redshift clusters.If you have planned deployments for large data loads, ANALYZE, or VACUUM operations, you can defer maintenance for up to 45 days.Important: You can't defer maintenance after the maintenance window has started.You can make changes to the maintenance track to control the cluster version applied during a maintenance window. There are three maintenance tracks to choose from:Current – Use the most current approved cluster version.Trailing – Use the cluster version before the current version.Preview – Use the cluster version that contains new features available for preview.Changes to maintenance tracks aren't allowed for the following situations:A Redshift cluster requires a hardware upgrade or a node of a Redshift cluster needs to be replaced.Mandatory upgrades or patches are required for a Redshift cluster.The maintenance track can't be set to Trailing for a Redshift cluster with the most current cluster version.If the Redshift provisioned cluster maintenance track is set to Preview, then changes from one Preview to another Preview track isn’t allowed.If the Redshift provisioned cluster track is set to Current or Trailing, then you can't change the maintenance track to Preview.ResolutionNote: If a mandatory maintenance window is required for your Redshift cluster, AWS will send a notification before the start of the maintenance window.Redshift cluster requires hardware upgrade or a node of a Redshift cluster needs to be replacedEach new version change can include updates to the operating system, security, and functionality. AWS will send a notification and make the required changes. This happens automatically when there's a hardware update, or another mandatory update, and the cluster maintenance track is set to Current.Your Amazon Redshift cluster isn't available during the maintenance window.Mandatory upgrades or patches are required for Redshift clusterMandatory upgrades or patches are deployed for a particular cluster or for all clusters in an AWS Region. You will receive a notification before the mandatory upgrade or required patch.AWS requires at least a 30-minute window in your instance's weekly schedule to confirm that all instances have the latest patches and upgrades. During the maintenance window, tasks are performed on clusters and instances. For the security and stability of your data, maintenance can cause instances to be unavailable.Maintenance track can't be set to trailing for a Redshift cluster with the most current cluster versionIf your cluster maintenance track isn't changing to Trailing, it's because your cluster is already using the most current approved cluster version. You must wait until the next new release becomes available for the current version to trail. After a new cluster version is released, you can change your cluster's maintenance track to Trailing and it will stay in Trailing for future maintenance. 
For more information, see Choosing cluster maintenance tracks.Redshift provisioned cluster maintenance track set to previewIf your cluster maintenance track is set to use the Preview track, then switching from one Preview track to another isn't allowed.If you restore a new Redshift cluster from a snapshot of an older cluster that used the Preview track, the following happens:The restored Redshift cluster inherits the source cluster’s maintenance track.The restored Redshift cluster can’t be changed to a different type of Preview maintenance track.Redshift provisioned cluster maintenance track set to current or trailingIf Current or Trailing is selected for a provisioned Redshift cluster, then the maintenance track can't be changed to the Preview track.Related informationRolling back the cluster versionWhy didn't Amazon Redshift perform any upgrades during the maintenance window?Follow"
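As a sketch, the maintenance track and deferred maintenance settings can be checked and changed from the AWS CLI; the cluster identifier is a placeholder, and the change is still subject to the restrictions listed above.
# Check the current cluster version, maintenance track, and any deferred maintenance windows
aws redshift describe-clusters --cluster-identifier my-redshift-cluster \
  --query 'Clusters[0].[ClusterVersion,MaintenanceTrackName,DeferredMaintenanceWindows]'
# Switch the cluster to the trailing track
aws redshift modify-cluster --cluster-identifier my-redshift-cluster --maintenance-track-name trailing
# Defer maintenance for 30 days (must be done before the maintenance window starts)
aws redshift modify-cluster-maintenance --cluster-identifier my-redshift-cluster \
  --defer-maintenance --defer-maintenance-duration 30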
https://repost.aws/knowledge-center/redshift-change-maintenance-track
How can I serve multiple domains from a CloudFront distribution over HTTPS?
I want to serve multiple domains from an Amazon CloudFront distribution over HTTPS.
"I want to serve multiple domains from an Amazon CloudFront distribution over HTTPS.ResolutionTo serve multiple domains from CloudFront over HTTPS, add the following values to your distribution settings:Enter all domain names in the Alternate Domain Names (CNAMEs) field. For example, to use the domain names example1.com and example2.com, enter both domain names in Alternate Domain Names (CNAMEs).Note: Choose Add item to add each domain name on a new line.Add your SSL certificate that covers all the domain names. You can add a certificate that's requested with AWS Certificate Manager (ACM). Or, you can add a certificate that's imported to either AWS Identity and Access Management (IAM) or ACM.Note: It's a best practice to import your certificate to ACM. However, you can also import your certificate in the IAM certificate store.For each the domain name, configure your DNS service so that the alternate domain names route traffic to the CloudFront domain name for your distribution. For example, configure example1.com and example2.com to route traffic to d111111abcdef8.cloudfront.net.Note: You can't use CloudFront to route to a specific origin based on the alternate domain name. CloudFront natively supports routing to a specific origin based only on the path pattern. However, you can use Lambda@Edge to route to an origin based on the Host header. For more information, see Dynamically route viewer requests to any origin using Lambda@Edge.Related informationValues that you specify when you create or update a distributionUsing custom URLs by adding alternate domain names (CNAMEs)Follow"
https://repost.aws/knowledge-center/multiple-domains-https-cloudfront
"How can I determine why I was charged for CloudWatch usage, and then how can I reduce future charges?"
"I'm seeing high Amazon CloudWatch charges in my AWS bill. How can I see why I was charged for CloudWatch usage, and then how can I reduce future charges?"
"I'm seeing high Amazon CloudWatch charges in my AWS bill. How can I see why I was charged for CloudWatch usage, and then how can I reduce future charges?Short descriptionReview your AWS Cost and Usage reports to understand your CloudWatch charges. Look for charges for the following services.Note: Items in bold are similar to what you might see in your reports. In your reports, region represents the abbreviation for your AWS Regions.Custom metrics: MetricStorage region-CW:MetricMonitorUsageCloudWatch metric API calls:API Name region-CW:RequestsGetMetricData region-CW:GMD-Requests/MetricsCloudWatch alarms:Unknown region-CW:AlarmMonitorUsageUnknown region-CW:HighResAlarmMonitorUsageCloudWatch dashboards: DashboardHour DashboardsUsageHour(-Basic)CloudWatch Logs:PutLogEvents region-DataProcessing-BytesPutLogEvents region-VendedLog-BytesHourlyStorageMetering region-TimedStorage-ByteHrsCloudWatch Contributor Insights:Contributor Insights Rules: region-CW:ContributorInsightRulesContributor Insights matched log events: region-CW:ContributorInsightEventsCloudWatch Synthetics canary runs: region-CW:Canary-runsWhen you understand what you were charged for and why, use the following recommendations to reduce future costs by adjusting your CloudWatch configuration.To easily monitor your AWS costs in the future, enable billing alerts.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Detailed monitoringCharges are incurred by detailed CloudWatch monitoring for Amazon Elastic Compute Cloud (Amazon EC2) instances, Auto Scaling group launch configurations, or API gateways.To reduce costs, turn off detailed monitoring of instances, Auto Scaling group launch configurations, or API gateways, as appropriate.Custom metricsCharges are incurred by monitoring more than ten custom metrics. Custom metrics include those that you created as well as those used by tools such as the CloudWatch agent and application or OS data from EC2 instances.Request metrics for Amazon Simple Storage Service (Amazon S3) and Amazon Simple Email Service (Amazon SES) events sent to CloudWatch incur charges.PutMetricData calls for a custom metric can also incur charges.Amazon Kinesis Data Streams enhanced (shard-level) metrics and AWS Elastic Beanstalk enhanced health reporting metrics sent to CloudWatch incur charges.To reduce costs, turn off monitoring of custom metrics, as appropriate. To show custom metrics only, enter NOT AWS in Search for any metric, dimension or resource ID box of the CloudWatch console.CloudWatch metric API callsCharges vary by CloudWatch metric API. API calls that exceed the AWS Free Tier limit incur charges. GetMetricData and GetMetricWidgetImage aren't counted in the AWS Free Tier.Third-party monitoring tools can increase costs because they perform frequent API calls.To reduce costs:Make ListMetrics calls through the console for free rather than making them through the AWS CLI.Batch multiple PutMetricData requests into one API call. Also consider pre-aggregating metric data into a StatisticSet. Using these best practices reduces the API call volume and corresponding charges are reduced.In use cases involving a third-party monitoring tool, make sure that you are retrieving only metrics that are actively being monitored or that are being used by workloads. Reducing the retrieved metrics reduces the amount charged. 
You can also consider using metric streams as an alternative solution, and then evaluate which deployment is the most cost effective. For more information, see Should I use GetMetricData or GetMetricStatistics for CloudWatch metrics? Also be sure to review costs incurred by third-party monitoring tools.CloudWatch alarmsCharges are incurred by the number of metrics associated with a CloudWatch alarm. For example, if you have a single alarm with multiple metrics, you're charged for each metric.To reduce costs, remove unnecessary alarms.CloudWatch dashboardsCharges are incurred when you exceed three dashboards (with up to 50 metrics).Calls to dashboard-related APIs through the AWS CLI or an SDK also incur charges after requests exceed the AWS Free Tier limit.Exception: GetMetricWidgetImage always incurs charges.To reduce costs, delete unnecessary dashboards. If you're using the AWS Free Tier, keep your total number of dashboards to three or fewer. Also be sure to keep the total number of metrics across all dashboards to less than 50. Make dashboard-related API calls through the console for free rather than making them through the AWS CLI or an SDK.CloudWatch LogsCharges are incurred by ingestion, archival storage, and analysis of Amazon CloudWatch Logs.Ingestion charges reflect the volume of log data ingested by the CloudWatch Logs service. The CloudWatch metric IncomingBytes reports on the volume of log data processed by the service. By visualizing this metric in a CloudWatch graph or dashboard, you can monitor the volume of logs generated by various workloads. If high CloudWatch Logs ingestion charges occur, follow the guidance in Which Log Group is causing a sudden increase in my CloudWatch Logs bill? To reduce ingestion costs, you can re-evaluate logging levels and eliminate the ingestion of unnecessary logs.Archival charges are related to the log storage costs over time. The retention policy determines how long CloudWatch Logs keeps the data. You can create a retention policy so that CloudWatch automatically deletes data older than the set retention period. This limits the data retained over time. The default retention policy on log groups is set to Never Expire. This setting means that CloudWatch retains data indefinitely. To reduce storage costs, consider changing the retention policy (for example, you can set the retention policy to keep data for 1 week, 1 month, and so on).Analysis charges occur when Log Insights is used to query logs. The charge is based on the volume of data scanned in order to provide query results. The Log Insights console provides a history of previously run queries. To reduce analysis charges, you can review the Log Insights query history and set queries to run over shorter timeframes. This reduces the amount of data scanned.CloudWatch Contributor InsightsCharges are incurred when you exceed one Contributor Insights rule per month, or more than 1 million log events match the rule per month.To reduce costs, view your Contributor Insights reports and remove any unnecessary rules.CloudWatch SyntheticsCharges are incurred when you exceed 100 canary runs per month using CloudWatch Synthetics.To reduce costs, delete any unnecessary canaries.Related informationAmazon CloudWatch pricingAWS services that publish CloudWatch metricsMonitoring metrics with Amazon CloudWatchHow can I determine why I was charged for EventBridge usage, and then how can I reduce future charges?Follow"
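As a sketch of the CloudWatch Logs recommendations above, the following commands list the log groups storing the most data and set a 30-day retention policy; the log group name is a placeholder.
# Find the log groups storing the most data (storedBytes drives archival charges)
aws logs describe-log-groups \
  --query 'sort_by(logGroups,&storedBytes)[-10:].[logGroupName,storedBytes,retentionInDays]' \
  --output table
# Replace the default Never Expire retention with a 30-day policy on a hypothetical log group
aws logs put-retention-policy --log-group-name /my-app/example-log-group --retention-in-days 30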
https://repost.aws/knowledge-center/cloudwatch-understand-and-reduce-charges
How do I troubleshoot a failed Spark step in Amazon EMR?
I want to troubleshoot a failed Apache Spark step in Amazon EMR.
"I want to troubleshoot a failed Apache Spark step in Amazon EMR.Short descriptionTo troubleshoot failed Spark steps:For Spark jobs submitted with --deploy-mode client: Check the step logs to identify the root cause of the step failure.For Spark jobs submitted with --deploy-mode cluster: Check the step logs to identify the application ID. Then, check the application master logs to identify the root cause of the step failure.ResolutionClient mode jobsWhen a Spark job is deployed in client mode, the step logs provide the job parameters and step error messages. These logs are archived to Amazon Simple Storage Service (Amazon S3). For example:s3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/steps/s-2M809TD67U2IA/controller.gz: This file contains the spark-submit command. Check this log to see the parameters for the job.s3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/steps/s-2M809TD67U2IA/stderr.gz: This file provides the driver logs. (When the Spark job runs in client mode, the Spark driver runs on the master node.)To find the root cause of the step failure, run the following commands to download the step logs to an Amazon Elastic Compute Cloud (Amazon EC2) instance. Then, search for warnings and errors:#Download the step logs:aws s3 sync s3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/steps/s-2M809TD67U2IA/ s-2M809TD67U2IA/#Open the step log folder:cd s-2M809TD67U2IA/#Uncompress the log file:find . -type f -exec gunzip {} \;#Get the yarn application id from the cluster mode log:grep "Client: Application report for" * | tail -n 1#Get the errors and warnings from the client mode log:egrep "WARN|ERROR" *For example, this file:s3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/containers/application_1572839353552_0008/container_1572839353552_0008_01_000001/stderr.gzindicates a memory problem:19/11/04 05:24:45 ERROR SparkContext: Error initializing SparkContext.java.lang.IllegalArgumentException: Executor memory 134217728 must be at least 471859200. Please increase executor memory using the --executor-memory option or spark.executor.memory in Spark configuration.Use the information in the logs to resolve the error.For example, to resolve the memory issue, submit a job with more executor memory:spark-submit --deploy-mode client --executor-memory 4g --class org.apache.spark.examples.SparkPi /usr/lib/spark/examples/jars/spark-examples.jarCluster mode jobs1.    Check the stderr step log to identify the ID of the application that's associated with the failed step. The step logs are archived to Amazon S3. For example, this log:s3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/steps/s-2M809TD67U2IA/stderr.gzidentifies application_1572839353552_0008:19/11/04 05:24:42 INFO Client: Application report for application_1572839353552_0008 (state: ACCEPTED)2.    Identify the application master logs. When the Spark job runs in cluster mode, the Spark driver runs inside the application master. The application master is the first container that runs when the Spark job executes. 
The following is an example list of Spark application logs.In this list, container_1572839353552_0008_01_000001 is the first container, which means that it's the application master.s3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/containers/application_1572839353552_0008/container_1572839353552_0008_01_000001/stderr.gzs3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/containers/application_1572839353552_0008/container_1572839353552_0008_01_000001/stdout.gzs3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/containers/application_1572839353552_0008/container_1572839353552_0008_01_000002/stderr.gzs3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/containers/application_1572839353552_0008/container_1572839353552_0008_01_000002/stdout.gz3.    After you identify the application master logs, download the logs to an Amazon EC2 instance. Then, search for warnings and errors. For example:#Download the Spark application logs:aws s3 sync s3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/containers/application_1572839353552_0008/ application_1572839353552_0008/#Open the Spark application log folder:cd application_1572839353552_0008/ #Uncompress the log file:find . -type f -exec gunzip {} \;#Search for warning and errors inside all the container logs. Then, open the container logs returned in the output of this command.egrep -Ril "ERROR|WARN" . | xargs egrep "WARN|ERROR"For example, this log:s3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/containers/application_1572839353552_0008/container_1572839353552_0008_01_000001/stderr.gzindicates a memory problem:19/11/04 05:24:45 ERROR SparkContext: Error initializing SparkContext.java.lang.IllegalArgumentException: Executor memory 134217728 must be at least 471859200. Please increase executor memory using the --executor-memory option or spark.executor.memory in Spark configuration.4.    Resolve the issue identified in the logs. For example, to fix the memory issue, submit a job with more executor memory:spark-submit --deploy-mode cluster --executor-memory 4g --class org.apache.spark.examples.SparkPi /usr/lib/spark/examples/jars/spark-examples.jar 1000Related informationAdding a Spark stepFollow"
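Before downloading logs, you can also pull the step's failure summary directly, which usually includes a short reason and the log path; the cluster and step IDs below match the placeholders used in this example.
# Show the step state and failure details (reason, message, and log file location) for the failed step
aws emr describe-step --cluster-id j-35PUYZBQVIJNM --step-id s-2M809TD67U2IA \
  --query 'Step.Status.[State,FailureDetails]'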
https://repost.aws/knowledge-center/emr-spark-failed-step
How can I configure on-premises servers to use temporary credentials with SSM Agent and unified CloudWatch Agent?
I have a hybrid environment with on-premises servers that use AWS Systems Manager Agent (SSM Agent) and the unified Amazon CloudWatch Agent installed. How can I configure my on-premises servers to use only temporary credentials?
"I have a hybrid environment with on-premises servers that use AWS Systems Manager Agent (SSM Agent) and the unified Amazon CloudWatch Agent installed. How can I configure my on-premises servers to use only temporary credentials?ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.The unified CloudWatch Agent can be installed to on-premises hosts for improved performance monitoring. You can do this by specifying AWS Identity and Accesses Management (IAM) credentials that are written to a configuration file.However, some use cases might require the greater security of rotating credentials that aren’t saved to local files.In this more secure deployment scenario, the SSM Agent allows the on-premises host to assume an IAM role. Then the unified CloudWatch Agent can be configured to use this IAM role to publish metrics and logs to CloudWatch.To configure your on-premises servers to use only temporary credentials:1.    Integrate the on-premises host with AWS System Manager.2.    Attach the AWS managed IAM CloudWatchAgentServerPolicy to the IAM Service Role for a Hybrid Environment. Now the unified CloudWatch Agent has the permissions to post metrics and logs to CloudWatch.3.    Install or update the AWS CLI.4.    Confirm that the IAM Role is attached to the on-premises host:$ aws sts get-caller-identity{ "UserId": "AROAJXQ3RVCBOTUDZ2AWM:mi-070c8d5758243078f", "Account": "123456789012", "Arn": "arn:aws:sts::123456789012:assumed-role/SSMServiceRole/mi-070c8d5758243078f"}5.    Install the unified CloudWatch Agent.6.    Update the common-config.toml file to:Point to the credentials generated by SSM AgentSet a proxy configuration (if applicable)Note: These credentials are refreshed by the SSM Agent every 30 minutes.Linux:/opt/aws/amazon-cloudwatch-agent/etc/common-config.toml/etc/amazon/amazon-cloudwatch-agent/common-config.toml[credentials] shared_credential_profile = "default" shared_credential_file = "/root/.aws/credentials"Windows:$Env:ProgramData\Amazon\AmazonCloudWatchAgent\common-config.toml[credentials] shared_credential_profile = "default" shared_credential_file = "C:\\Windows\\System32\\config\\systemprofile\\.aws\\credentials"7.    Choose the AWS Region that the unified CloudWatch Agent metrics will post to.8.    Add the region in the credential file referenced by the SSM Agent in Step 5. This corresponds to the file associated with the shared_credential_file.$ cat /root/.aws/config [default]region = "eu-west-1"Note: Be sure to replace eu-west-1 with your target Region.9.    Depending on your host operating system, you might have to update permissions to allow the unified CloudWatch Agent to read the SSM Agent credentials file. Windows hosts run both agents as SYSTEM user and no further action is required.For Linux hosts, by default the unified CloudWatch Agent runs as the root user. The unified CloudWatch Agent can be configured to run as a non-privileged user with the run_as_user option. When using this option, you must grant the unified CloudWatch Agent access to the credentials file.10.    (Windows only) Change the Startup type of the unified CloudWatch Agent service to Automatic (Delayed). 
This starts the unified CloudWatch Agent service after the SSM Agent service during boot.Related informationSetting up AWS Systems Manager for hybrid environmentsDownload the CloudWatch agent on an on-premises serverInstall and configure the unified CloudWatch Agent to push metrics and logs from an EC2 instance to CloudWatchFollow"
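As a hedged illustration of steps 9 and 10, the commands below show one way to grant a non-root agent user read access to the SSM Agent credentials on Linux and to delay the agent service start on Windows. The cwagent user name, the /root/.aws path, and the AmazonCloudWatchAgent service name are assumptions based on a default installation; adjust them to match your hosts:
# Linux: allow the assumed 'cwagent' user to traverse /root and read the SSM Agent credentials
sudo setfacl -m u:cwagent:x /root
sudo setfacl -R -m u:cwagent:rX /root/.aws
# Windows: start the CloudWatch agent service after the SSM Agent service during boot
sc.exe config AmazonCloudWatchAgent start= delayed-auto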
https://repost.aws/knowledge-center/cloudwatch-on-premises-temp-credentials
Why can't I connect to my resources over a Transit Gateway peering connection?
"I have inter-Region AWS Transit Gateway peering set up between my source virtual private cloud (VPC) and remote VPC. However, I am unable to connect my VPC resources over the peering connection. How can I troubleshoot this?"
"I have inter-Region AWS Transit Gateway peering set up between my source virtual private cloud (VPC) and remote VPC. However, I am unable to connect my VPC resources over the peering connection. How can I troubleshoot this?ResolutionConfirm that the source and remote VPCs are attached to the correct transit gatewayUse the following steps at the source VPC and the remote VPC:Open the Amazon Virtual Private Cloud (Amazon VPC) console.From the navigation pane, choose Transit gateway attachments.Confirm that:The VPC attachments are associated with the correct Transit gateway ID that you used to set up peering.The source VPC and the transit gateway that it's attached to are in the same Region.The remote VPC and the transit gateway that it's attached to are in the the same Region.Find the transit gateway route table that the source and the remote VPC attachments are associated withOpen the Amazon VPC console and choose Transit gateway attachments.Select the VPC attachment.In the Associated route table ID column, note the transit gateway route table ID.Find the transit gateway route table that the source and the remote peering attachments are associated withOpen the Amazon VPC console and choose Transit gateway attachments.Select the Peering attachment.In the Associated route table ID column, note the value transit gateway route table ID.Confirm that source VPC attachment associated with a transit gateway has a static route for remote VPC that points to the transit gateway peering attachmentOpen the Amazon VPC console and choose Transit gateway route tables.Select the Route table. This is the value that you noted in the section Find the transit gateway route table that the source and the remote VPC attachments are associated withChoose the Routes tab.Verify the routes for the remote VPC CIDR block that point to the transit gateway peering attachment.Confirm that remote VPC attachment associated with a transit gateway route table has a static route for source VPC that points to the transit gateway peering attachmentOpen the Amazon VPC console and choose Transit gateway route tables.Select the Route table. This is the value that you noted in the section Find the transit gateway route table that the source and the remote VPC attachments are associated with.Choose the Routes tab.Verify the routes for the source VPC CIDR block that point to the transit gateway peering attachment.Note: To route traffic between the peered transit gateways, add a static route to the transit gateway route table that points to the transit gateway peering attachment.Confirm that the source peering attachment associated transit gateway route table has a route for the source VPC that points to the source VPC attachmentOpen the Amazon VPC console and choose Transit gateway route tables.Select the route table. This is the value that you noted in the section Find the transit gateway route table that the source and the remote peering attachments are associated with.Choose the Routes tab.Verify the routes for the source VPC CIDR block pointing to source VPC attachment.Confirm that the remote peering attachment associated transit gateway route table has a route for the remote VPC that points to the remote VPC attachmentOpen the Amazon VPC console and choose Transit gateway route tables.Select the route table. 
This is the value that you noted in the section Find the transit gateway route table that the source and the remote peering attachments are associated with.Choose the Routes tab.Verify that there are routes for the remote VPC CIDR block pointing to the remote VPC attachment.Confirm that the routes for the source and remote VPCs are in the VPC subnet route table with the gateway set to Transit GatewayOpen the Amazon VPC console.From the navigation pane, choose Route tables.Select the route table used by the instance.Choose the Routes tab.Under Destination, verify that there's a route for the source/remote VPC CIDR block. Then, verify that Target is set to the Transit Gateway ID.Confirm that the source and remote Amazon EC2 instances' security groups and network access control lists (network ACLs) allow trafficOpen the Amazon EC2 console.From the navigation pane, choose Instances.Select the instance where you're performing the connectivity test.Choose the Security tab.Verify that the Inbound rules and Outbound rules allow traffic.Open the Amazon VPC console.From the navigation pane, choose Network ACLs.Select the network ACL that's associated with the subnet where your instance is located.Select the Inbound rules and Outbound rules. Verify that the rules allow the traffic needed by your use case.Confirm that the network ACL associated with the transit gateway network interface allows trafficOpen the Amazon EC2 console.From the navigation pane, choose Network Interfaces.In the search bar, enter Transit gateway. The results show all of the transit gateway's network interfaces.Note the Subnet ID that's associated with the location where the transit gateway interfaces were created.Open the Amazon VPC console.From the navigation pane, choose Network ACLs.In the Filter network ACLs search bar, enter the subnet ID that you noted in step 3. This shows the network ACL associated with the subnet.Confirm that the Inbound rules and Outbound rules of the network ACL allow traffic to or from the source or remote VPC.Follow"
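You can also spot-check the same configuration from the AWS CLI. The following is a minimal, read-only sketch; the transit gateway ID and route table ID are placeholders for your own values:
# Confirm which VPC and peering attachments exist and which transit gateway they belong to
aws ec2 describe-transit-gateway-attachments --filters "Name=transit-gateway-id,Values=tgw-0123456789abcdef0"
# Check a transit gateway route table for an active route that covers the remote VPC CIDR
aws ec2 search-transit-gateway-routes --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 --filters "Name=state,Values=active"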
https://repost.aws/knowledge-center/transit-gateway-peering-connection
How do I troubleshoot connection issues between my Fargate task and other AWS services?
I want to troubleshoot connectivity issues I am having between my AWS Fargate task and an AWS service.
"I want to troubleshoot connectivity issues I am having between my AWS Fargate task and an AWS service.Short descriptionApplications that run inside a Fargate task with Amazon Elastic Container Service (Amazon ECS) can fail to access other AWS services due to the following reasons:Insufficient AWS Identity and Access Management (IAM) permissionsIncorrect subnet routesNetwork access control list (network ACL) restrictionsSecurity groupsAmazon Virtual Private Cloud (Amazon VPC) endpointsTo resolve these issues, use Amazon ECS Exec to interact with the application container of the Fargate task. If you observe connection timeout errors in the application container logs, then test the connectivity between the Fargate task and the corresponding AWS service.ResolutionUse ECS Exec to interact with the application container of the Fargate task1.    Before using Amazon ECS exec, complete the prerequisites of using Amazon ECS Exec.2.    Follow the instructions in Using Amazon ECS Exec to turn on the feature.3.    Run Amazon ECS Exec to access your application container and check the network and IAM connectivity between the container and AWS service.Note: Before performing Exec, it's a best practice to set the parameter initProcessEnabled to true. This keeps AWS Systems Manager Agent (SSM Agent) child processes from becoming orphaned. (Optional) Add a sleep command for the application container to keep the container running for a specified time period.Example:{ "taskRoleArn": "ecsTaskRole", "networkMode": "awsvpc", "requiresCompatibilities": [ "EC2", "FARGATE" ], "executionRoleArn": "ecsTaskExecutionRole", "memory": ".5 gb", "cpu": ".25 vcpu", "containerDefinitions": [ { "name": "application", "image": "application:latest", "essential": true, "command": ["sleep","7200"], "linuxParameters": { "initProcessEnabled": true } } ], "family": "ecs-exec-task"}If you can't use Exec to access your application container, then run Exec for a new Fargate task that runs on the amazon/aws-cli Docker image. This lets you test the communication between the Fargate task and the AWS service.Note: The new Fargate task must have the same networking setup (subnets, security groups, and so on) as your application container.To run a new Fargate task with the amazon/aws-cli Docker image, complete the following steps:Note: AWS Command Line Interface (AWS CLI) is preinstalled on the amazon/aws-cli image of your container. If AWS CLI isn't installed on your application container, then run the following command:curl "https://awscli.amazonaws.com/awscli-exe-linux-x86\_64.zip" -o "awscliv2.zip" unzip awscliv2.zip sudo ./aws/install1.    Create a task definition with amazon/aws-cli as the image for the container. Then, add the entry points tail, -f, and /dev/null to put the container in a continuous Running state.Example task definition:{ "requiresCompatibilities": \[ "FARGATE" \], "family": "aws-cli", "containerDefinitions": \[ { "entryPoint": \[ "tail", "-f", "/dev/null" \], "name": "cli", "image": "amazon/aws-cli", "essential": true } \], "networkMode": "awsvpc", "memory": "512", "cpu": "256", "executionRoleArn": "arn:aws:iam::123456789012:role/EcsTaskExecutionRole", "taskRoleArn": "arn:aws:iam::123456789012:role/TaskRole" }2.    
Create an Amazon ECS service with the newly created task definition and with the same network configuration as the application container:$ aws ecs create-service --cluster <example-cluster-name> --task-definition <example-task-definition-name> --network-configuration awsvpcConfiguration="{subnets=[example-subnet-XXXXXXX, example-subnet-XXXXXXX],securityGroups=[example-sg-XXXXXXXXXXXX],assignPublicIp=ENABLED}" --enable-execute-command --service-name <example-service-name> --desired-count 1 --launch-type FARGATE --region <example-region>Note: Replace example-cluster-name with your cluster name, example-task-definition-name with your task definition name, example-service-name with your service name, and example-region with your AWS Region.3.    Run Exec to access the Amazon ECS Fargate task container, and run the /bin/sh command against your specified container-name and task-id:$ aws ecs execute-command --cluster <example-cluster-name> --task <example-task-id> --container <example-container-name> --interactive --command "/bin/sh" --region <example-region>Note: Replace example-cluster-name with your cluster name, example-task-id with your task ID, example-container-name with your container name, and example-region with your Region.If you still have issues using ECS Exec on your Fargate task, then see the ECS Exec troubleshooting guidance in the Amazon ECS documentation.Test the connectivity between a Fargate task and the corresponding AWS serviceTroubleshoot insufficient IAM permissionsCheck whether the Fargate task has sufficient IAM permissions to connect to the corresponding AWS service. To run AWS CLI commands for the required AWS service, see the AWS CLI Command Reference.Example connectivity test between the Fargate task and Amazon Simple Notification Service (Amazon SNS):# aws sns list-topics --region <example-region-name>If you receive the following error, then check the Amazon VPC endpoint policy. Make sure that the policy allows access to perform the necessary actions against the AWS service.An error occurred (AuthorizationError) when calling the ListTopics operation: User: arn:aws:sts::123456789012:assumed-role/TaskRole/123456789012 is not authorized to perform: SNS:ListTopics on resource: arn:aws:sns:<region-name>:123456789012:* with an explicit deny in a VPC endpoint policyIf you receive the following error, then check the permissions of the Amazon ECS task IAM role. Make sure that the IAM role has the required permissions to perform the required actions on the AWS service.An error occurred (AuthorizationError) when calling the ListTopics operation: User: arn:aws:sts::123456789012:assumed-role/TaskRole/123456789012 is not authorized to perform: SNS:ListTopics on resource: arn:aws:sns:<region-name>:123456789012:* because no identity-based policy allows the SNS:ListTopics actionNote: If you don't see any error when running AWS CLI commands on the Fargate task, then the required IAM permissions are present for that AWS service.Troubleshoot connection timeout errors1.    
Use # telnet to test the network connectivity to your AWS service endpoints from the Fargate task:# telnet <EXAMPLE-ENDPOINT> <EXAMPLE-PORT>Note: Replace EXAMPLE-ENDPOINT with your AWS service endpoint name and URL and EXAMPLE-PORT with your AWS service port.The following example output shows that the endpoint is accessible from the container:Trying 10.0.1.169...Connected to sns.us-east-1.amazonaws.com.Escape character is '^]'.You can also check DNS resolution of the endpoint with the dig and nslookup commands:# dig <EXAMPLE-ENDPOINT># nslookup <EXAMPLE-ENDPOINT>For a list of Regional AWS service endpoints, see Service endpoints and quotas for AWS services.Note: If you didn't install telnet and dig in the application container, then run the apt-get update, apt install dnsutils, and apt install telnet commands to install them. For containers based on amazon/aws-cli, use the yum update, yum install telnet, and yum install bind-utils commands to install telnet and other tools.2.    If you receive Connection timed out errors after testing the network connectivity to your AWS service endpoints, then inspect the network configuration:Run the nslookup command. If you see VPC CIDR IP ranges, then traffic is routing through VPC endpoints:# nslookup sns.us-east-1.amazonaws.comNon-authoritative answer:Name: sns.us-east-1.amazonaws.comAddress: 10.0.1.169Name: sns.us-east-1.amazonaws.comAddress: 10.0.2.248For Connection timed out errors, check the inbound rules of the VPC endpoint security group. Make sure that TCP traffic over port 443 is allowed in the inbound rules from the ECS security group or VPC CIDR. For more information, see How can I troubleshoot connectivity issues over my gateway and interface VPC endpoints?If no Amazon VPC endpoints are configured in the Region, then check the routes from your subnets to the internet. For a Fargate task in a public subnet, make sure that your task has a default route to the internet gateway. For a Fargate task in a private subnet, make sure that your task has a default route. Your task needs a default route to the NAT gateway, AWS PrivateLink, another source of internet connectivity, or to local and VPC CIDR.Make sure that the network ACL allows access to the AWS service.Check that the inbound rules of the security group that's attached to the AWS service that you're trying to access with your Fargate task allow the ingress traffic over the required ports.Check that the outbound rules of the Fargate task security group allow egress traffic over the required ports to connect to the AWS service.Follow"
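If you'd rather check the surrounding network configuration from your workstation instead of from inside the task, the following read-only AWS CLI calls are one way to do it. The VPC ID, security group ID, and Region are placeholders for your own values:
# List the interface or gateway endpoints configured in the task's VPC
aws ec2 describe-vpc-endpoints --filters "Name=vpc-id,Values=vpc-0123456789abcdef0" --region us-east-1
# Review the inbound and outbound rules of the endpoint (or task) security group
aws ec2 describe-security-group-rules --filters "Name=group-id,Values=sg-0123456789abcdef0" --region us-east-1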
https://repost.aws/knowledge-center/fargate-connection-issues
How do I resolve the "DockerTimeoutError" error in AWS Batch?
The jobs in my AWS Batch compute environment are failing and are returning the following error: "DockerTimeoutError: Could not transition to created; timed out after waiting 4m0s." How do I troubleshoot "DockerTimeoutError" errors in AWS Batch?
"The jobs in my AWS Batch compute environment are failing and are returning the following error: "DockerTimeoutError: Could not transition to created; timed out after waiting 4m0s." How do I troubleshoot "DockerTimeoutError" errors in AWS Batch?Short descriptionIf your docker start and docker create API calls take longer than four minutes, then AWS Batch returns a DockerTimeoutError error.Note: The default timeout limit set by the Amazon Elastic Container Service (Amazon ECS) container agent is four minutes.The error can occur for a variety of reasons, but it's commonly caused by one of the following:The ECS instance volumes of the AWS Batch compute environment are under high I/O pressure from all the other jobs in your queue. These jobs, which are created on and run on the ECS instance, can deplete the burst balance. To resolve this issue, follow the steps in the Resolve any burst balance issues section of this article.Stopped ECS containers aren't being cleaned fast enough to free up the Docker daemon. You can experience Docker issues if you're using a customized Amazon Machine Image (AMI) instead of the default AMI provided by AWS Batch. The default AMI for AWS Batch optimizes your Amazon ECS cleanup settings. To resolve this issue, follow the steps in the Resolve any Docker issues section of this article.If neither of these issues is causing the error, then you can further troubleshoot the issue by doing the following:Check your Docker logs to identify the source of the error.Run the Amazon ECS logs collector script on the ECS instances in the ECS cluster associated with your AWS Batch compute environment.ResolutionResolve any burst balance issuesCheck the burst balance of your ECS instance1.    Open the Amazon ECS console.2.    In the navigation pane, choose Clusters. Then, choose the cluster that contains your job.Note: The name of the cluster starts with the name of the compute environment, followed by _Batch_ and a random hash of numbers and letters.3.    Choose the ECS Instances tab.4.    From the EC2 Instance column, choose your instance.Note: To find the failed job's instance ID, run the AWS Batch describe-jobs command. The instance ID appears in the output for containerInstanceArn.5.    On the Descriptions tab in the Amazon EC2 console, under Block devices, choose the link for your volume.6.    On the block device pop-up window, for EBS ID, choose your volume.7.    Choose the Monitoring tab. Then, choose Burst Balance to check your burst balance metrics. If your burst balance drops to 0, then your burst balance is depleted.Create a launch template for your managed compute environmentNote: If you change the launch template, you must create a new compute environment.1.    Open the Amazon EC2 console, and then choose Launch Templates.2.    Choose Create launch template.3.    For AMI ID, select the default Amazon ECS optimized AMI.4.    In the Storage (Volumes) section, choose a volume type in the Volume type column. Then, enter an integer value in the Size(GiB) column.Note: If you choose Provisioned IOPS SSD (io1) for your volume type, enter an integer value that's permitted for IOPS.5.    Choose Create launch template.6.    Use your new launch template to create a new managed compute environment.Create an AWS Batch compute environment with your AMINote: If you change the AMI, you must create a new compute environment because the AMI ID parameter can't be updated.1.    Open the Amazon EC2 console.2.    Choose Launch instance.3.    
Follow the steps in the setup wizard to create your instance.Important: On the Add Storage page, modify the volume type or size of your instance. The larger the volume size, the greater the baseline performance is and the faster it replenishes the burst balance. To get better performance for high I/O loads, change the volume to type io1.4.    Create a compute resource AMI from your instance.5.    Create a compute environment for AWS Batch that includes your AMI ID.Resolve any Docker issuesBy default, the Amazon ECS container agent automatically cleans up stopped tasks and Docker images that your container instances aren't using. If you run new jobs with new images, then your container storage might fill up with Docker images you aren't using.1.    Use SSH to connect to the container instance for your AWS Batch compute environment.2.    To inspect the Amazon ECS container agent, run the Docker inspect ecs-agent command. Then, review the env section in the output.Note: You can reduce the values of the following variables to speed up task and image cleanup:ECS_ENGINE_TASK_CLEANUP_WAIT_DURATIONECS_IMAGE_CLEANUP_INTERVALECS_IMAGE_MINIMUM_CLEANUP_AGEECS_NUM_IMAGES_DELETE_PER_CYCLEYou can also use tunable parameters for automated task and image cleanup.3.    Create a new AMI with updated values.-or-Create a launch template with the user data that includes your new environment variables.To create a new AMI with updated values1.    Set your agent configuration parameters in the /etc/ecs/ecs.config file.2.    Restart your container agent.3.    Create a compute resource AMI from your instance.4.    Create a compute environment for AWS Batch that includes your AMI ID.To create a launch template with the user data that includes your new environment variables1.    Create a launch template with user data.For example, the user data in the following MIME multi-part file overrides the default Docker image cleanup settings for a compute resource:MIME-Version: 1.0Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="--==MYBOUNDARY==Content-Type: text/x-shellscript; charset="us-ascii"#!/bin/bashecho ECS_IMAGE_CLEANUP_INTERVAL=60m >> /etc/ecs/ecs.configecho ECS_IMAGE_MINIMUM_CLEANUP_AGE=60m >> /etc/ecs/ecs.config--==MYBOUNDARY==--2.    Use your new launch template to create a managed compute environment.Related informationAWS services that publish CloudWatch metricsCompute resource AMIsamazon-ecs-agent (AWS GitHub)Follow"
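To check the burst balance without clicking through the console, you can query the CloudWatch metric directly. This is a sketch only; the volume ID, time range, and Region are placeholders:
# A BurstBalance value close to 0 during job failures points to depleted I/O credits on the volume
aws cloudwatch get-metric-statistics --namespace AWS/EBS --metric-name BurstBalance \
  --dimensions Name=VolumeId,Value=vol-0123456789abcdef0 \
  --start-time 2023-01-01T00:00:00Z --end-time 2023-01-01T06:00:00Z \
  --period 300 --statistics Minimum --region us-east-1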
https://repost.aws/knowledge-center/batch-docker-timeout-error
How can I troubleshoot the "Waiting for the slave SQL thread to free enough relay log space" error in Amazon Aurora MySQL?
I received the following error in the output of the SHOW SLAVE STATUS command on an Amazon Aurora MySQL DB cluster that is working as a replica of binary log replication:"Waiting for the slave SQL thread to free enough relay log space"How can I troubleshoot and resolve this error?
"I received the following error in the output of the SHOW SLAVE STATUS command that is working as a replica of binary log replication in Amazon Aurora MySQL:"Waiting for the slave SQL thread to free enough relay log space"How can I troubleshoot and resolve this error?Short descriptionWhen Aurora MySQL is a replica of binary log replication, it runs the I/O thread and the SQL thread in the same way as MySQL. The I/O thread reads binary logs from the primary, and then saves them as relay logs in the replica DB instance. The SQL thread processes the events in the relay logs, and then deletes the relay logs when the events in the relay logs are processed.If the SQL thread doesn't process events fast enough to catch up with the speed that the relay logs are being generated at, the amount of relay logs increase.When the global variable relay_log_space_limit is set to larger than 0 and the total size of all relay logs reach the limit, new relay logs aren't saved. Until the relay log space becomes available again, the output of SHOW SLAVE STATUS shows the message "Waiting for the slave SQL thread to free enough relay log space" in the Slave_IO_State field.In Aurora MySQL, the relay_log_space_limit is set to 1000000000 (953.6 MiB) and can't be modified. This prevents the cluster volume from becoming unnecessary large. When the total size of all relay logs reaches 1000000000 bytes (953.6 MiB), the I/O thread stops saving relay logs. It waits for the SQL thread to process events and delete the existing logs. Slave_IO_State then shows the message "Waiting for the slave SQL thread to free enough relay log space". If the SQL thread isn't stopped, the relay logs are eventually deleted, and the I/O thread resumes saving new relay logs.This also means that replication lag exists because the SQL isn't fast enough to catch up with the generation of relay logs by the I/O thread. Even if relay_log_space_limit is modified to a larger value, the relay logs still further accumulate, and the issue isn't resolved until the SQL thread catches up.You can view the current relay log space, the status of the I/O thread, and the status of the SQL thread in the output of the SHOW SLAVE STATUS command.Slave_IO_State: Waiting for the slave SQL thread to free enough relay log spaceMaster_Log_File: mysql-bin-changelog.237029Read_Master_Log_Pos: 55356151Relay_Master_Log_File: mysql-bin-changelog.237023Exec_Master_Log_Pos: 120Relay_Log_Space: 1000002403Master_Log_File and Read_Master_Log_Pos show the binary log file name and the position where the I/O thread completed reading and saving. Relay_Master_Log_File and Exec_Master_Log_Pos show the binary log file name and the position where the SQL thread is processing. Although what the SQL thread actually reads are relay logs, the corresponding binary log file name in the primary DB instance and the position is displayed.When Master_Log_File is different from Relay_Master_Log_File, the SQL thread isn't fast enough. 
If Master_Log_File and Relay_Master_Log_File are the same, the I/O thread might be contributing to the lag.The following factors can cause insufficient performance of the SQL thread:Long-running queries on the primary DB instanceInsufficient DB instance class size or storageParallel queries performed on the primary DB instanceBinary logs synced to the disk on the replica DB instanceBinlog_format on the replica DB instance is set to ROWFor more information on resolving these issues, see How can I troubleshoot high replica lag with Amazon RDS for MySQLAdditionally, the following factors can also impact the performance of the SQL thread:A very large Transaction History List Length (HLL) on the replica DB instanceLess-than-efficient I/O operations on the replica DB instanceTables with lots of secondary indexes on the replica DB instanceResolutionAs long as there are writes happening in your replica, you don't need to worry about relay log space. You can monitor this using the Write Throughput metric in Enhanced Monitoring.Instead, focus on troubleshooting the replica's performance. For more details, see How can I troubleshoot high replica lag with Amazon RDS for MySQL and Why did my Amazon Aurora read replica fall behind and restart?Related informationMySQL documentation for Replica server options and variablesFollow"
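To watch the SQL thread and the history list length from a shell, you can run queries through the mysql client against the replica endpoint. The endpoint, user name, and the use of the information_schema.innodb_metrics counter are assumptions for illustration:
# Compare Master_Log_File/Read_Master_Log_Pos with Relay_Master_Log_File/Exec_Master_Log_Pos
mysql -h example-replica-endpoint -u admin -p -e "SHOW SLAVE STATUS\G" | egrep "Slave_IO_State|Master_Log_File|Read_Master_Log_Pos|Exec_Master_Log_Pos|Relay_Log_Space"
# Check the transaction history list length (HLL) on the replica
mysql -h example-replica-endpoint -u admin -p -e "SELECT name, count FROM information_schema.innodb_metrics WHERE name = 'trx_rseg_history_len';"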
https://repost.aws/knowledge-center/aurora-mysql-slave-sql-thread-error
How do I test if my delegated subdomain resolves correctly?
"I want to test and confirm if my delegated subdomain resolves correctly. If the subdomain doesn't resolve correctly, then I want to troubleshoot it."
"I want to test and confirm if my delegated subdomain resolves correctly. If the subdomain doesn't resolve correctly, then I want to troubleshoot it.Short descriptionYou can configure a parent zone for your apex domain (such as example.com) using Amazon Route 53 or a third-party DNS provider. You can also use your DNS provider to set up a delegation set for the subdomain (such as www.example.com).If you use a separate delegation set for your subdomain, then you can have the following configurations:An apex domain and a subdomain that both use Route 53An apex domain that uses a third-party DNS service and a subdomain that uses Route 53An apex domain that uses Route 53 and a subdomain that's delegated to a third-party DNS serviceTo verify if your subdomain resolves correctly and troubleshoot as needed, complete the following steps depending on your DNS provider and configuration.ResolutionNote: The following commands apply only to Amazon Elastic Compute Cloud (Amazon EC2) Linux instances. If you use Amazon EC2 for Windows, you can use third-party web tools such as DiG GUI and Dig web interface for troubleshooting.An apex domain and a subdomain that both use Route 531.    To check that your subdomain resolves correctly, run the dig command:dig RECORD_TYPE DESIRED_SUBDOMAIN_RECORDNote: Replace RECORD_TYPE and DESIRED_SUBDOMAIN_RECORD with your relevant details.2.    In the output, verify that you have a record type of your choice under your subdomain’s hosted zone. In the following example output, there's an A record for www.example.com under the subdomain:$ dig www.example.com;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 48170;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0;; QUESTION SECTION:;www.example.com. IN A;; ANSWER SECTION:www.example.com. 60 IN A 127.0.0.13.    If the lookup is successful against other DNS servers, then your local resolver might have caching issues. To bypass your local resolver, run the dig @ command with another resolver and your domain name. For example, the following lookup uses Google’s public resolver:dig @8.8.8.8 www.example.comPerform the lookup directly against one of the authoritative AWS name servers for the apex domain’s hosted zone:dig @ns-***.awsdns-**.com www.example.com4.    If the DNS lookup fails, then use the dig +trace command:dig +trace www.example.comThen, review the output to identify where the lookup fails along the DNS chain. See the following example output for a dig +trace command:$dig +trace www.example.com; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.62.rc1.56.amzn1 <<>> +trace www.example.com;; global options: +cmd. 518400 IN NS G.ROOT-SERVERS.NET....... 518400 IN NS F.ROOT-SERVERS.NET.;; Received 228 bytes from 169.xxx.xxx.xxx#53(169.xxx.xxx.xxx) in 21 mscom. 172800 IN NS c.gtld-servers.net......com. 172800 IN NS i.gtld-servers.net.;; Received 498 bytes from 199.xxx.xxx.xxx #53(199.xxx.xxx.xxx) in 198 ms.example.com. 172800 IN NS ns-xxx.awsdns-xx.com..example.com. 172800 IN NS ns-xxx.awsdns-xx.net..example.com. 172800 IN NS ns-xxx.awsdns-xx.co.uk..example.com. 172800 IN NS ns-xxx.awsdns-xx.org.;; Received 207 bytes from 192.xxx.xxx.xxx #53(192.xxx.xxx.xxx) in 498 mswww.example.com. 172800 IN NS ns-xxx.awsdns-xx.com.www.example.com. 172800 IN NS ns-xxx.awsdns-xx.net.www.example.com. 172800 IN NS ns-xxx.awsdns-xx.co.uk.www.example.com. 172800 IN NS ns-xxx.awsdns-xx.org.;; Received 175 bytes from 205.xxx.xxx.xxx #53(205.xxx.xxx.xxx) in 345 mswww.example.com. 900 IN SOA ns-xxx.awsdns-xx.com. 
awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400$ dig www.example.com.com;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22072;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0;; QUESTION SECTION:;www.example.com.com. IN A;; AUTHORITY SECTION:www.example.com.com. 60 IN SOA ns-xxx.awsdns-xx.com. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 864005.    Depending on the information in your output, follow the relevant troubleshooting steps.dig returns a NOERROR status with no ANSWER section, and the dig +trace output only includes the apex domain’s name serversThe NS record for your delegated subdomain is missing from the hosted zone of your apex domain. Also, the record type for the subdomain under the root domain is wrong. For example, an MX record is listed instead of an A record.To fix this issue, create an NS record under your apex domain’s hosted zone for your subdomain with the correct name servers. Also, remove the non-NS record for the subdomain under the apex domain’s hosted zone. Then, place the non-NS record under the subdomain’s hosted zone.dig returns NXDOMAIN status and dig +trace output only includes apex domain’s name serversThe NS record for your delegated subdomain is missing under your apex domain’s hosted zone.To fix this issue, create an NS record under your apex domain’s hosted zone with the correct name servers.dig +trace returns your name servers for the delegated subdomain, but dig returns NOERROR status with no ANSWER sectionThe hosted zone contains a record that's the wrong type for your delegated subdomain. For example, a TXT record exists instead of an A record for your subdomain under the subdomain’s hosted zone.To fix this issue, create a new A record for the delegated subdomain under the subdomain’s hosted zone.dig +trace returns name servers for the delegated subdomain, but dig returns the SERVFAIL statusThe Route 53 name servers for your delegated subdomain under your apex domain’s hosted zone are incorrect in the NS record. Confirm this problem with a DNS lookup against one of the delegated subdomain’s name servers using the dig @ command. An incorrect name server returns a REFUSED status.To fix this issue, modify the NS record with the correct name servers for your subdomain’s hosted zone.An apex domain that uses a third-party DNS service and a subdomain that uses Route 531.    Check if the name servers for the subdomain are properly configured in the parent zone.2.    If the name servers aren't properly configured, then look up the NS records for your subdomain’s hosted zone. Then, add those records to the apex domain’s hosted zone or zone file with the third-party DNS provider.3.    To verify that the subdomain resolves correctly, use the dig @ command with one of the subdomain's hosted zone name servers:dig @ns-***.awsdns-**.com www.example.comNote: If the DNS resolution fails, then follow the methods in step 4 of An apex domain and a subdomain that both use Route 53.An apex domain that uses Route 53 and a subdomain that is delegated to a third party1.    Check if the name servers for the subdomain are properly configured in the parent zone.2.    If the name servers aren't properly configured, then add NS records under the hosted zone for your apex domain in Route 53.3.    
To verify that the subdomain resolves correctly, use the dig @ command with your third-party DNS service's authoritative name server:dig @THIRD_PARTY_NAME_SERVER www.example.comNote: If the DNS resolution fails, then follow the methods in step 4 of An apex domain and a subdomain that both use Route 53.Related informationHow do I create a subdomain for my domain that's hosted in Route 53?Follow"
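One more quick check that often isolates delegation problems is comparing the NS records that the parent zone serves for the subdomain with the NS record set in the subdomain's own hosted zone. The name servers below are placeholders for your own values:
# Delegation (NS records) that the parent zone's name server returns for the subdomain
dig @ns-111.awsdns-11.com www.example.com NS +norecurse
# NS record set inside the subdomain's own hosted zone
dig @ns-222.awsdns-22.net www.example.com NS
If the two lists don't match, update the delegation NS record in the parent zone to the subdomain hosted zone's name servers.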
https://repost.aws/knowledge-center/delegated-subdomain-resolve
Why can't I copy an object between two Amazon S3 buckets?
"I'm trying to copy an object from one Amazon Simple Storage Service (Amazon S3) bucket to another, but it's not working. How can I troubleshoot this?"
"I'm trying to copy an object from one Amazon Simple Storage Service (Amazon S3) bucket to another, but it's not working. How can I troubleshoot this?Short descriptionTo troubleshoot issues with copying an object between buckets, check the following:Bucket policies and AWS Identity and Access Management (IAM) policiesObject ownershipAWS Key Management Service (AWS KMS) encryptionAmazon Simple Storage Service Glacier (Amazon S3 Glacier) storage classRequester Pays enabled on the bucketAWS Organizations service control policyCross-Region request issues with Amazon Virtual Private Cloud (VPC) endpoints for Amazon S3ResolutionBucket policies and IAM policiesTo copy an object between buckets, you must make sure that the correct permissions are configured. To copy an object between buckets in the same AWS account, you can set permissions using IAM policies. To copy an object between buckets in different accounts, you must set permissions on both the relevant IAM policies and bucket policies.Note: For instructions on how to modify a bucket policy, see How do I add an S3 bucket policy? For instructions on how to modify the permissions for an IAM user, see Changing permissions for an IAM user. For instructions on how to modify the permissions for an IAM role, see Modifying a role.Confirm the following required permissions:At minimum, your IAM identity (user or role) must have permissions to the s3:ListBucket and s3:GetObject actions on the source bucket. If the buckets are in the same account, then set these permissions using your IAM identity's policies or the S3 bucket policy. If the buckets are in different accounts, then set these permissions using both the bucket policy and your IAM identity's policies.At minimum, your IAM identity must have permissions to the s3:ListBucket and s3:PutObject actions on the destination bucket. If the buckets are in the same account, then set these permissions using your IAM identity's policies or the S3 bucket policy. If the buckets are in different accounts, then set these permissions using both the bucket policy and your IAM identity's policies.Review the relevant bucket policies and IAM policies to confirm that there are no explicit deny statements conflicting with the permissions that you need. An explicit deny statement overrides an allow statement.For specific operations, confirm that your IAM identity has permissions to all the necessary actions within the operation. For example, to run the command aws s3 cp, you need permission to s3:GetObject and s3:PutObject. To run the command aws s3 cp with the --recursive option, you need permission to s3:GetObject, s3:PutObject, and s3:ListBucket. To run the command aws s3 sync, then you need permission to s3:GetObject, s3:PutObject, and s3:ListBucket.Note: If you're using the AssumeRole API operation to access Amazon S3, you must also verify that the trust relationship is configured correctly.For version-specific operations, confirm that your IAM identity has permissions to version-specific actions. For example, to copy a specific version of an object, you need the permission for s3:GetObjectVersion in addition to s3:GetObject.If you're copying objects that have object tags, then your IAM identity must have s3:GetObjectTagging and s3:PutObjectTagging permissions. You must have s3:GetObjectTagging permission for the source object and s3:PutObjectTagging permission for objects in the destination bucket.Review the relevant bucket policies and IAM policies to be sure that the Resource element has the correct path. 
For bucket-level permissions, the Resource element must point to a bucket. For object-level permissions, the Resource element must point to an object or objects.For example, a policy statement for a bucket-level action such as s3:ListBucket must specify a bucket in the Resource element, like this:"Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET"A policy statement for object-level actions like s3:GetObject or s3:PutObject must specify an object or objects in the Resource element, similar to the following:"Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"Object ownershipIf the bucket policies have the correct permissions and you're still having problems copying an object between buckets, check which account owns the object. The bucket policy applies only to objects owned by the bucket owner. An object that's owned by a different account might have conflicting permissions on its access control list (ACL).Note: The object ownership and ACL issue typically occurs when you copy AWS service logs across accounts. Examples of service logs include AWS CloudTrail logs and Elastic Load Balancing access logs.Follow these steps to find the account that owns an object:1.    Open the Amazon S3 console.2.    Navigate to the object that you can't copy between buckets.3.    Choose the object's Permissions tab.4.    Review the values under Access for object owner and Access for other AWS accounts:If the object is owned by your account, then the Canonical ID under Access for object owner contains (Your AWS account).If the object is owned by another account and you can access the object, then the following is true:The Canonical ID under Access for object owner contains (External account).The Canonical ID under Access for other AWS accounts contains (Your AWS account).If the object is owned by another account and you can't access the object, then the following is true:Canonical ID fields for both Access for object owner and Access for other AWS accounts are empty.If the object that you can't copy between buckets is owned by another account, then the object owner can do one of the following:The object owner can grant the bucket owner full control of the object. After the bucket owner owns the object, the bucket policy applies to the object.The object owner can keep ownership of the object, but they must change the ACL to the settings that you need for your use case.AWS KMS encryptionIf the object is encrypted using an AWS KMS key, then confirm that your IAM identity has the correct permissions to the key. If your IAM identity and AWS KMS key belong to the same account, then confirm that your key policy grants the required AWS KMS permissions.If your IAM identity and AWS KMS key belong to different accounts, then confirm that both the key and IAM policies grant the required permissions.For example, if you copy objects between two buckets (and each bucket has its own KMS key), then the IAM identity must specify the following:kms:Decrypt permissions, referencing the first KMS keykms:GenerateDataKey and kms:Decrypt permissions, referencing the second KMS keyFor more information, see Using key policies in AWS KMS and Actions, resources, and condition keys for AWS Key Management Service.Amazon S3 Glacier storage classYou can't copy an object from the Amazon S3 Glacier storage class. You must first restore the object from Amazon S3 Glacier before you can copy the object. 
For instructions, see How do I restore an S3 object that has been archived?Requester Pays enabled on bucketIf the source or destination bucket has Requester Pays enabled, and you're accessing the bucket from another account, check your request. Make sure that your request includes the correct Requester Pays parameter:For AWS Command Line Interface (AWS CLI) commands, include the --request-payer option.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.For GET, HEAD, and POST requests, include x-amz-request-payer : requester.For signed URLs, include x-amz-request-payer=requester.AWS Organizations service control policyIf you're using AWS Organizations, check the service control policies to be sure that access to Amazon S3 is allowed.For example, the following policy results in a 403 Forbidden error when you try to access Amazon S3 because it explicitly denies access:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Deny", "Action": "S3:*", "Resource": "*" } ]}For more information on the features of AWS Organizations, see Enabling all features in your organization.Cross-Region request issues with VPC endpoints for Amazon S3VPC endpoints for Amazon S3 currently don't support cross-Region requests. For example, suppose you have an Amazon Elastic Compute Cloud (Amazon EC2) instance in Region A with a VPC endpoint configured in its associated route table. In this case, the Amazon EC2 instance can't copy an object from Region B to a bucket in Region A. Instead, you receive an error message similar to the following:"An error occurred (AccessDenied) when calling the CopyObject operation: VPC endpoints do not support cross-region requests"To troubleshoot this cross-Region request issue, you can try the following:Remove the VPC endpoint from the route table. If you remove the VPC endpoint, the instance must be able to connect to the internet instead.Run the copy command from another instance that's not using the VPC endpoint. Or, run the copy command from an instance that's in neither Region A nor Region B.If you must use the VPC endpoint, send a GET request to copy the object from the source bucket to the EC2 instance. Then, send a PUT request to copy the object from the EC2 instance to the destination bucket.Related informationCopying objectsHow do I troubleshoot 403 Access Denied errors from Amazon S3?Follow"
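As a hedged example of the checks above, the following commands inspect an object's ACL owner and retry the copy with the Requester Pays parameter. The bucket names and object key are placeholders:
# Check which canonical user owns the object (the bucket policy only applies to objects that the bucket owner owns)
aws s3api get-object-acl --bucket DOC-EXAMPLE-SOURCE-BUCKET --key example-object
# Copy from a Requester Pays bucket that belongs to another account
aws s3 cp s3://DOC-EXAMPLE-SOURCE-BUCKET/example-object s3://DOC-EXAMPLE-DESTINATION-BUCKET/example-object --request-payer requester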
https://repost.aws/knowledge-center/s3-troubleshoot-copy-between-buckets
How do I utilize CDK escape hatches to retrieve lower-level construct objects from L3 and L2 constructs?
I want to use AWS Cloud Development Kit (AWS CDK) escape hatches to retrieve child objects of L2 and L3 constructs.
"I want to use AWS Cloud Development Kit (AWS CDK) escape hatches to retrieve child objects of L2 and L3 constructs.Short descriptionThere are three AWS CDK abstraction layers:L1 constructs have 1:1 relationships that map to the related AWS CloudFormation resource types. This is the most fundamental construct layer for AWS CDK.L2 constructs can wrap a number of L1 constructs and its default child object is the relevant resource type's L1 construct. Other L1 construct child objects are synthesized into AWS CloudFormation templates based on the L2 child object's specified properties.L3 constructs are the highest level of AWS CDK abstraction layers and can wrap a number of L2 and L1 constructs.For more information, see abstractions and escape hatches.ResolutionUse AWS CDK escape hatches to retrieve child objects from an Amazon Elastic Kubernetes Service (Amazon EKS) cluster with an L3 construct.Note: These steps use the Python programming language. The steps are similar for any other programming languages. Make sure to adjust code syntax for the programming language you're using.An example Amazon EKS cluster with a L3 construct in Python:vpc = ec2.Vpc(self, "Vpc", ip_addresses=ec2.IpAddresses.cidr("192.168.0.0/25") ) eks_object = eks.Cluster(self, "HelloEKS", version=eks.KubernetesVersion.V1_25, vpc=vpc, vpc_subnets=[ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_EGRESS)] )1.    Retrieve all the child objects of a L3 construct in an Amazon EKS cluster by using the node.find_all() attribute:for child in eks_object.node.find_all(): print(child.node.id)After using the preceding command, all child IDs of the L3 construct print.Example printout:HelloEKS...NodegroupDefaultCapacityNodeGroupRole...2.    After printing the child IDs of the L3 construct, retrieve the desired child ID by using the node.find_child() attribute:Important: Make sure to check all AWS Command Line Interface (AWS CLI) commands and replace all instances of example strings with your values. For example, replace example_child_id with your target child ID.l2_nodeGroup = eks_object.node.find_child(example_child_id)After using this command, the L2 construct of the desired child ID prints.Example printout:<aws_cdk.aws_eks.Nodegroup object at 0x7ffa9c7b2910>Note: You can use variables l2_nodeGroup to invoke the Nodegroup properties, attributes, and methods to modify the associated resources.3.    Retrieve all the child objects of the L2 construct by using the node.find_all() attribute:for child in l2_nodeGroup.node.find_all(): print(child.node.id)After using the preceding command, all child IDs of the L2 construct will print.Example printout:NodegroupDefaultCapacityNodeGroupRoleImportNodeGroupRole4.    After the child IDs of the L2 construct print, retrieve the desired child ID by using the node.find_child() attribute:l2_nodeGroup_role = l2_nodeGroup.node.find_child(example_child_id) print(l2_nodeGroup_role)After using the preceding command, an object at the L2 layer will return at the aws_iam.Role level.5.    
When you are at the aws_iam.Role level, use the following node.default_child attribute to reach the L1 CfnRole construct object:l1_nodeGroup_role = l2_nodeGroup_role.node.default_child print(l1_nodeGroup_role)After using the preceding command, the default child at the L1 layer will return.Note: When you use node.find_all() or node.default_child to retrieve child objects, you can use that construct's functionalities for increased controls over a CloudFormation template.If you still can't retrieve child objects, contact AWS Support or create a new issue at the GitHub website for AWS CDK issues.Related informationHow do I customize a resource property value when there is a gap between CDK higher level constructs and a CloudFormation resource?Follow"
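After you modify a construct through an escape hatch, one way to confirm that the change took effect is to synthesize the stack and inspect the generated CloudFormation template from the shell. The stack name and resource type below are placeholders for your own values:
# Synthesize the stack and search the template for the IAM role that backs the node group
cdk synth ExampleEksStack > template.yaml
grep -A 5 "AWS::IAM::Role" template.yaml
# Compare the synthesized template against the deployed stack before deploying the change
cdk diff ExampleEksStack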
https://repost.aws/knowledge-center/cdk-retrieve-construct-objects
"Do Classic Load Balancers, Application Load Balancers, and Network Load Balancers support SSL/TLS session resumption?"
"I want to know if Classic Load Balancers, Application Load Balancers, and Network Load Balancers support Secure Sockets Layer/Transport Layer Security (SSL/TLS) session resumption."
"I want to know if Classic Load Balancers, Application Load Balancers, and Network Load Balancers support Secure Sockets Layer/Transport Layer Security (SSL/TLS) session resumption.ResolutionAll types of load balancers support SSL/TLS session resumption. However, the connection methods that they support varies.SSL/TLS connection methodsThere are two types of TLS handshakes: full and abbreviated. The full handshake is performed only once. After the handshake, the client establishes an SSL/TLS session with the server. On subsequent connections, the abbreviated handshake is used to resume the previously negotiated session more quickly.There are two ways to establish or resume a TLS connection:SSL session IDs – This method is based on both the client and server keeping session security parameters for a period of time after a fully negotiated connection ends. A server that intends to use session resumption assigns a unique identifier for the session, called the session ID. The server then returns the session ID to the client in the ServerHello message. To resume an earlier session, the client must submit the appropriate session ID in its ClientHello message. If the server finds the corresponding session in its cache and accepts the request, then the server returns the samesession identifier. The server then continues with the abbreviated SSL handshake. Otherwise, the server issues a new session identifier and switches to a full handshake.SSL session tickets – This method doesn't require server-side storage. The server gathers all session data, encrypts it, and then returns it to the client in the form of a ticket. On subsequent connections, the client submits the ticket back to the server. Then, the server checks the ticket integrity, decrypts the contents, and uses the information in it to resume the session. If the server or client doesn't support this extension, then fall back to the session identifier mechanism built into SSL.Supported SSL/TLS connection methods for each load balancer typeClassic Load BalancersClassic Load Balancers support session ID-based SSL/TLS session resumption but don't support session ticket-based SSL session resumption. SSL session caching is supported at the node level. For example, suppose that a client connects to node B using the SSL session ID received from node A. When this happens, the SSL handshake reverts to a full handshake. After that, a new SSL session ID is generated by node B.Application Load BalancersApplication Load Balancers support both session ID and session ticket-based SSL session resumption. Both session IDs and session tickets are supported at the node level. For example, suppose that a client connects to node B using the SSL session ID or session ticket received from node A. When this happens, the SSL handshake reverts to a full handshake. After that, a new SSL session ID and session ticket are generated by node B.Network Load BalancersNetwork Load Balancers support only session tickets for session resumption. Resumption using session tickets is supported at the Regional level. Clients can resume TLS sessions with a Network Load Balancer using any of its IP addresses.Follow"
https://repost.aws/knowledge-center/elb-ssl-tls-session-resumption-support
How do I share manual Amazon RDS DB snapshots or Aurora DB cluster snapshots with another AWS account?
I want to share manual Amazon Relational Database Service (Amazon RDS) DB snapshots or Amazon Aurora DB cluster snapshots with another account.
"I want to share manual Amazon Relational Database Service (Amazon RDS) DB snapshots or Amazon Aurora DB cluster snapshots with another account.Short descriptionYou can share manual DB snapshots with up to 20 AWS accounts. You can start or stop sharing manual snapshots by using the Amazon RDS console, except for the following limitations:You can't share automated Amazon RDS snapshots with other AWS accounts. To share an automated snapshot, copy the snapshot to make a manual version, and then share that copy.You can't share manual snapshots of DB instances that use custom option groups with persistent or permanent options. For example, this includes Transparent Data Encryption (TDE) and time zone.You can share encrypted manual snapshots that don't use the default Amazon RDS encryption key. But you must first share the AWS Key Management Service (AWS KMS) key with the account that you want to share the snapshot with. To share the key with another account, share the AWS Identity and Access Management (IAM) policy with the primary and secondary accounts. You can't restore shared encrypted snapshots directly from the destination account. First, copy the snapshot to the destination account by using an AWS KMS key in the destination account. Then, restore the copied snapshot.To share snapshots that use the default AWS managed key for Amazon RDS (aws/rds), encrypt the snapshot by copying it with a customer managed key. Then, share the newly created snapshot.You can share snapshots across AWS Regions. First share the snapshot, and then copy the snapshot to the same Region in the destination account. Then, copy the snapshot to another Region.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Open the Amazon RDS console.In the navigation pane, choose Snapshots.Choose the DB snapshot that you want to copy.Choose Actions, and then choose Share Snapshot.Choose the DB snapshot visibility:Public allows all AWS accounts to restore a DB instance from your manual DB snapshot.Private allows only AWS accounts that you specify to restore a DB instance from your manual DB snapshot.In the AWS Account ID field, enter the ID of the AWS account that you want to permit to restore a DB instance from your manual DB snapshot. Then, choose Add.Note: You can repeat this step to share snapshots with up to 20 AWS accounts.Choose Save.To stop sharing a snapshot with an AWS Account, select the Delete check box next to the account ID from the Manage Snapshot Permissions pane.Choose Save.You can restore a DB instance or DB cluster from a shared snapshot by using the AWS CLI or Amazon RDS API. To do this, you must specify the full Amazon Resource Name (ARN) of the shared snapshot as the snapshot identifier.Related informationSharing a DB snapshotCreating a DB snapshotRestoring from a DB snapshotFollow"
https://repost.aws/knowledge-center/rds-snapshots-share-account
Why is my AWS DMS task in an error status?
"My AWS Database Migration Service (AWS DMS) task is in an error status. What does an error status mean, and how can I troubleshoot and resolve the error?"
"My AWS Database Migration Service (AWS DMS) task is in an error status. What does an error status mean, and how can I troubleshoot and resolve the error?Short descriptionAn AWS DMS task that is in an error status means that one or more of the tables in the task couldn't be migrated. A task in an Error status continues to load other tables from the selection rule, but a failed task stops with fatal errors.ResolutionTo identify the table that has an error, open the AWS DMS console.Choose Database migration tasks from the navigation pane.In the Tables errored column, the number of tables that have errors are listed.Choose the name of the task that has an error status.From the Table statistics section, check the Load state column to see which table names have errors. Or you can run describe-table-statistics.To troubleshoot error messages further, turn on Amazon CloudWatch logging. If you haven't turned on logging, stop the task, and modify the task to turn on logging. Then, restart the task.From the Logs page for your task, filter the timestamps for events that have ]E: and ]W: in them.After you resolve the errors, reload the tables, or restart the task for to the error status to be resolved.Related informationCommon errors for AWS DMSMigration strategy for relational databasesFollow"
https://repost.aws/knowledge-center/dms-task-error-status
Why isn’t my EBS volume increase reflected in my OS or Disk Management?
"I increased the size of my Amazon Elastic Block Store (Amazon EBS) volume, but the change isn’t reflected in my operating system or Disk Management."
"I increased the size of my Amazon Elastic Block Store (Amazon EBS) volume, but the change isn’t reflected in my operating system or Disk Management.ResolutionModifying an EBS volume requires two steps. First, increase the size of the EBS volume using the Amazon Elastic Compute Cloud (Amazon EC2) console or the AWS Command Line Interface (AWS CLI). Then, extend the volume’s file system to use the new storage capacity. For instructions, see:Extend a Windows file system after resizing a volumeExtend a Linux file system after resizing a volumeNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Related informationRequirements when modifying volumesMonitor the progress of volume modificationsMap disks to volumes on your Windows instanceInitialize Amazon EBS volumes on LinuxInitialize Amazon EBS volumes on WindowsFollow"
https://repost.aws/knowledge-center/ebs-volume-increase-os
How do ELB DNS and traffic flow operate with different cross-zone load balancing configurations?
"I configured my Elastic Load Balancer (ELB) for two Availability Zones, but it shows only one IP address in DNS."
"I configured my Elastic Load Balancer (ELB) for two Availability Zones, but it shows only one IP address in DNS.Short descriptionWith Application Load Balancers, cross-zone load balancing is always turned on at load balancer level. Cross-zone load balancing can't be turned off, but it can be changed at the target group level.However, with Network Load Balancers and Gateway Load Balancers, cross-zone load balancing is turned off by default.When cross-zone load balancing is turned off, an Availability Zone must have at least one healthy target in each target group. When cross-zone load balancing is turned on, there must be at least one healthy target in each target group in any Availability Zone. Each condition keeps the Availability Zone healthy, and adds the corresponding Elastic Load Balancer node IP address to the Elastic Load Balancer DNS.ResolutionWhen cross-zone load balancing is turned offThe following is an example of when cross-zone load balancing is turned off between two Availability Zones, AZ1 and AZ2.Availability Zone 1 (AZ1) has two target groups, A and B, each with its own target, A1 and B1. Target A1 is unhealthy and target B1 is healthy. Because target A1 is unhealthy, AZ1 is also unhealthy.Availability Zone 2 (AZ2) also has two target groups, A and B, each with its own target, A2 and B2. Targets A2 and B2 are both healthy. Because each target in both target groups is healthy, AZ2 is healthy.The Elastic Load Balancer includes the IP address of AZ2 in the Elastic Load Balancer's DNS because AZ2 is the only healthy Availability Zone. As a result, when you resolve the domain of the Elastic Load Balancer, the IP address of AZ2 is the only one that appears.Traffic then gets routed through the Elastic Load Balancer node in AZ2 to the healthy target in the corresponding target group. If there are multiple healthy targets in a target group, then one target is selected based on the routing algorithm of the load balancer.If both Availability Zones are unhealthy, then the Elastic Load Balancer fails open. Each Elastic Load Balancer IP address is then added to the DNS of the load balancer.When cross-zone load balancing is turned onThe following is an example of when cross-zone load balancing is turned on using the same Availability Zones, AZ1 and AZ2:In AZ1, target A1 is unhealthy and target B1 is healthy. In AZ2, both targets A2 and B2 are healthy. Because each Availability Zone has at least one healthy target, Elastic Load Balancer includes both IP addresses in the DNS for the Elastic Load Balancer hostname.Traffic then gets routed to any of the Elastic Load Balancer nodes and forwarded to the targets in the corresponding target groups. If there are multiple healthy targets in a target group, then a target is selected based on the routing algorithm of the load balancer.If target B1 in AZ1 is unhealthy and target B2 in AZ2 is also unhealthy, then both Availability Zones are unhealthy. Because neither Availability Zone is healthy, the Elastic Load Balancer fails open. Each Elastic Load Balancer IP address is then added to the DNS of the load balancer. As a result, when you resolve the domain, the IP addresses for both Availability Zones appear.Related informationApplication Load Balancers now support turning off cross zone load balancing per target groupFollow"
https://repost.aws/knowledge-center/elb-dns-cross-zone-balance-configuration
Why is AWS Global Accelerator failing health checks with endpoints?
I want to know why the endpoints registered to my AWS Global Accelerator aren't healthy.
"I want to know why the endpoints registered to my AWS Global Accelerator aren't healthy.Short descriptionFor Standard Accelerators, AWS Global Accelerator automatically checks the health of the endpoints that are associated with your static IP addresses. It then directs the user traffic only to healthy endpoints.Global Accelerator supports four types of endpoints for Standard Accelerators: Amazon Elastic Compute Cloud (EC2) Instance, Elastic IP address, the Application Load Balancer, and the Network Load Balancer.You can specify health check options in Global Accelerator when configuring your endpoints. However, the accelerator uses this configuration only for EC2 and Elastic IP endpoint types. For the Application Load Balancer and the Network Load Balancer endpoints, Global Accelerator reuses the already-configured health checks associated with those endpoints.However, health checks might fail for the different endpoint types supported by Global Accelerator. To resolve these failures, review these solutions.ResolutionIdentify your endpoint type. Then, follow the steps in that section to review the health check status.Endpoint type: EC2 Instance or Elastic IP address1.    Log in to the Global Accelerator console.2.    Choose an accelerator for a health check from the list of accelerators.3.    Under Listeners, choose the listener that you want to review.4.    Under Endpoint groups, open the health check details.5.    Review these health check details: the path, the port, and the protocol associated with the endpoint group.6.    Locate the section labeled Endpoints. This section shows whether the endpoint passed or failed the health check. The section flags a failed health check status.7.    If the endpoint health check failed, then make sure the Firewall, Security Groups (SG), and Network Access Control List (NACL) have access to the Amazon Route 53 health checker IP addresses and the appropriate health check port.Global Accelerator requires that your router and firewall rules allow inbound traffic from the IP addresses associated with Route 53 health checkers. This lets the accelerator complete the health checks for the EC2 Instance or the Elastic IP address endpoints. The health check fails if the port or the IP addresses are blocked. The accelerator reports these endpoints as unhealthy. For more information about approved IP addresses, see IP address ranges of Amazon Route 53 servers.8.    Make sure you have a TCP, HTTP, or HTTPS server at your endpoint for health checks, irrespective of UDP or TCP listeners. Then, follow these steps:Check whether the application is listening on the required port and IP address (for health check ports and application ports) using the netstat command. If the application isn't listening on the IP address and port, then configure your application and make sure that it's working locally on the instance.On Windows: netstat -ano | findstr endpoint_IP_address:portOn Linux: netstat -anp | grep endpoint_IP_address:portNote: Replace endpoint_IP_address:port with your endpoint's IP address and port number.Use these tools to check the connectivity to the endpoints on the health check ports. The tests must be successful without any errors on all endpoints and target application instances. 
Make sure that the application can accept the configured health check requests according to these settings:For TCP health checks: telnet endpoint_IP_address health_check_portFor HTTP health checks: curl -vko /dev/null http://endpoint_IP_address:portFor HTTPS health checks: curl -vko /dev/null https://endpoint_IP_address:portNote: Replace endpoint_IP_address with your endpoint's IP address and health_check_port with the associated port number.Check if iptables (for Linux) and firewall (for Windows) are dropping the application traffic.Endpoint type: Application Load Balancer or Network Load BalancerIf the endpoint type is an Application Load Balancer or a Network Load Balancer, then Global Accelerator uses the load balancer's health check information to determine the health of the endpoints. There are a few unique considerations for how Global Accelerator calculates health for these endpoints:Application Load Balancer considerations1.    All target groups in your Application Load Balancer must be healthy for Global Accelerator to consider the load balancer healthy. For more information on how to configure the target group in the Application Load Balancer, see Target group health.2.    If ALL target groups are empty, then Global Accelerator considers the Application Load Balancer unhealthy.Network Load Balancer considerations1.    All target groups in your Network Load Balancer must be healthy for Global Accelerator to consider the load balancer healthy. For more information on how to configure the target group in the Network Load Balancer, see Target group health. 2.    If ANY single target group is empty, then Global Accelerator considers the Network Load Balancer unhealthy.Refer to the following articles if the Elastic Load Balancing (ELB) target groups are reporting unhealthy results:See How do I troubleshoot and fix failing health checks for Application Load Balancers? to make sure that the Application Load Balancer's targets are healthy.See Troubleshoot Your Network Load Balancer to make sure that the Network Load Balancer's targets are healthy.Related informationChanging health check optionsFollow"
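To see the health state that Global Accelerator itself reports for each endpoint, you can describe the endpoint group with the AWS CLI. This is a sketch; the endpoint group ARN is a placeholder, and the Global Accelerator API is called in the us-west-2 Region.
# Returns each endpoint with its HealthState and HealthReason
aws globalaccelerator describe-endpoint-group --endpoint-group-arn arn:aws:globalaccelerator::123456789012:accelerator/aaaa1111-bb22-cc33-dd44-eeee5555ffff/listener/abcd1234/endpoint-group/ab12cd34ef56 --region us-west-2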
https://repost.aws/knowledge-center/global-accelerator-unhealthy-endpoints
How do I troubleshoot helper scripts that won't bootstrap in a CloudFormation stack with Windows instances?
My helper scripts won't bootstrap in an AWS CloudFormation stack with Microsoft Windows instances. How do I resolve this issue?
"My helper scripts won't bootstrap in an AWS CloudFormation stack with Microsoft Windows instances. How do I resolve this issue?Short descriptionIn a Windows instance, UserData that runs as a child process of EC2ConfigService invokes cfn-init.exe. Certain steps that cfn-init.exe performs might require a system reboot.For example, you must reboot the system if you rename a computer, or join a computer to a domain. After the system reboots, cfn-init continues to run the rest of the configurations that remain in AWS::CloudFormation::Init with the help of C:\cfn\cfn-init\resume_db.json.For helper scripts that don't run after rebooting the Windows instance, complete the steps in the Troubleshoot bootstrapping issues section.If you receive the following error, then you might have issues with cfn-signal:"Received 0 conditions when expecting X or Failed to receive X resource signal(s) within the specified duration"To resolve the preceding error, complete the steps in the Troubleshoot cfn-signal issues section.For both issues, complete the steps in the Follow the best practices for using a Windows operating system with CloudFormation section.ResolutionTroubleshoot bootstrapping issuesIf your script doesn't run after reboot, then complete the following steps:1.    In the commands section of your cfn-init configuration set, confirm that waitAfterCompletion is set to forever. For example:"commands": { "0-restart": { "command": "powershell.exe -Command Restart-Computer", "waitAfterCompletion": "forever" } }Note: The forever value directs cfn-init to exit and resume only after the reboot is complete. For more information, see AWS::CloudFormation::Init.2.    Check the following logs for errors:Amazon Elastic Compute Cloud (Amazon EC2) configuration log at C:\Program Files\Amazon\Ec2ConfigService\Logs\Ec2ConfigLog.txt (versions before Windows 2016)Amazon EC2 configuration log at *C:\ProgramData\Amazon\EC2-Windows\Launch\Log* (Windows 2016 and later)cfn-init log at C:\cfn\log\cfn-init.logWindows Event logs at C:\Windows\System32\winevt\logsTroubleshoot cfn-signal issues1.    Confirm that the cfn-signal is configured correctly.Important: Use -e $lastexitcode in PowerShell scripts, and use -e %ERRORLEVEL% for Windows cmd.exe scripts.For PowerShell scripts in UserData, use the following tags:<powershell></powershell>Example PowerShell script:UserData: Fn::Base64: Fn::Sub : | <powershell> $LASTEXITCODE=0 echo Current date and time >> C:\Temp\test.log echo %DATE% %TIME% >> C:\Temp\test.log cfn-init.exe -s ${AWS::StackId} -r SInstance --region ${AWS::Region} New-Item -Path "C:\" -Name userdata -ItemType directory cfn-signal.exe -e $LASTEXITCODE --stack ${AWS::StackId} --resource WindowsInstance --region ${AWS::Region} </powershell>For cmd.exe scripts in UserData, use the following tags:<script></script>Example cmd.exe script:UserData: Fn::Base64: !Sub | <script> cfn-init.exe -v -s ${AWS::StackId} -r WindowsInstance --configsets ascending --region ${AWS::Region} cfn-signal.exe -e %ERRORLEVEL% --stack ${AWS::StackId} --resource WindowsInstance --region ${AWS::Region} </script>2.    Increase the Timeout of the WaitCondition to 1800/3600 seconds based on the example of bootstrapping a Windows stack.Note: Step 2 is necessary because Windows instances usually take longer than Linux instances to complete their initial boot process.3.    If you're using a custom Amazon Machine Image (AMI), then you must use Sysprep to create the AMI before you start. 
If you don't use Sysprep, then you might experience metadata issues and get the following error in the user data log for metadata:Failed to get metadata: The result from http://169.254.169.254/latest/user-data was emptyUnable to execute userdata: Userdata was not providedFollow the best practices for using a Windows operating system with CloudFormationInclude $ErrorActionPreference = "Stop" at the top of your PowerShell scripts to catch exceptions.When you refer to a Windows path in your CloudFormation template, you must add a forward slash (/) at the beginning of the path. For example:"commands" : { "1-extract" : { "command" : "C:\\SharePoint\\SharePointFoundation2010.exe /extract:C:\\SharePoint\\SPF2010 /quiet /log:C:\\SharePoint\\SharePointFoundation2010-extract.log" }For Windows stacks, you must base64-encode the WaitCondition handle URL again. For example:cfn-signal.exe -e %ERRORLEVEL% ", { "Fn::Base64" : { "Ref" : "SharePointFoundationWaitHandle" }}, "\n"Follow"
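When a stack fails with the "Failed to receive X resource signal(s)" error, it can also help to pull the failed events from the stack history before digging into the instance logs. This is a sketch; the stack name is a placeholder.
# List the resources that failed and the reason CloudFormation recorded
aws cloudformation describe-stack-events --stack-name my-windows-stack --query "StackEvents[?ResourceStatus=='CREATE_FAILED'].[LogicalResourceId,ResourceStatusReason]" --output table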
https://repost.aws/knowledge-center/cloudformation-helper-scripts-windows
How do I change the WorkSpace that I'm connecting to through the Amazon WorkSpaces client?
I want to change the WorkSpace that I connect to through the Amazon WorkSpaces client application. How can I do this?
"I want to change the WorkSpace that I connect to through the Amazon WorkSpaces client application. How can I do this?ResolutionTo change the WorkSpace that you connect to, follow these steps:Retrieve the user name and registration code for the new WorkSpace from your invitation email. If you haven't already completed the registration process by opening the link in your invitation email, then do so now. If you can't find your invitation email, ask your administrator to resend it.Note: All WorkSpaces that are launched from the same directory have the same registration code.Verify that you are using the latest Amazon WorkSpaces client. Then, open the client application.On the client login window, choose Change Registration Code. For more information, see WorkSpaces Windows client application or WorkSpaces macOS client application.The dropdown list displays all saved registration codes and the associated Region. You can select a different registration code from the list, or clear the text box and add a new registration code. Then, choose Continue.Sign in to the WorkSpace using the user name and password associated with the WorkSpace.To rename or remove a saved registration code, choose Settings, then choose Manage Login Information.Related informationGet started with WorkSpaces Quick SetupFollow"
https://repost.aws/knowledge-center/change-workspace-client
Why was I charged by Amazon Web Services when I don't have an AWS account?
"I received a bill for Amazon Web Services, but I don't have an AWS account. Why was I billed?"
"I received a bill for Amazon Web Services, but I don't have an AWS account. Why was I billed?ResolutionFirst, make sure that the charges were from Amazon Web Services.If your payment method is billed, then that payment method is associated with an AWS account with running resources. Here are some of the most common reasons why you might be billed for an account that you don't remember creating:Someone is using your payment method without your permission. If you're the individual owner of the payment method and don't have an AWS account, then someone might be using your payment information without permission. Speak with your credit card issuer to dispute the charges.Note: AWS Support can't resolve charges if you're not affiliated with an AWS account. For a stolen credit card or similar situation, contact your financial institution.A test account's Free Tier promotion expired. The AWS Free Tier covers new accounts for a year, allowing new customers to test many AWS services for free. Any resources left running after the Free Tier expires are billed at the On-Demand rates.You or another authorized user of the card might have forgotten that the account has running resources on it. For more information, see I unintentionally incurred charges while using the Free Tier. How do I make sure that I'm not billed again?Someone in your organization created the account. Check with other authorized users of the card to see if they've opened an AWS account using the card. This might be a spouse or another family member. If this is a corporate credit card, check with technical department heads within your organization. Common users of AWS include application or service developers, website designers, or systems administrators.A contractor or third party is using AWS to provide you a service. A contractor or third party that you hired might be building your website or app using AWS. Contact the third party for more details about these charges.If you need to contact AWS Support, use the I'm an AWS customer and I'm looking for billing or account support form. You can use this form to submit a case if you don't have an AWS account, or if you're unable to sign in. If you can't resolve your concern with the information provided, respond to the email you receive from AWS Support.Important: AWS Support can't discuss any account-related information if account security isn't verified.Related informationI can't sign in because my credentials don't workAvoiding unexpected chargesFollow"
https://repost.aws/knowledge-center/charges-from-unknown-account
Why did the DeliveryFrequency of my AWS Config configuration snapshot change?
"The deliveryFrequency of my AWS Config configuration snapshot changed, and there are PutDeliveryChannel events logged in AWS CloudTrail. However, I didn't change the AWS Config delivery channel. What caused this change?"
"The deliveryFrequency of my AWS Config configuration snapshot changed, and there are PutDeliveryChannel events logged in AWS CloudTrail. However, I didn't change the AWS Config delivery channel. What caused this change?ResolutionThe frequency that AWS Config delivers configuration snapshots is controlled by:The deliveryFrequency parameter in the PutDeliveryChannel API using the AWS Command Line Interface (AWS CLI).The MaximumExecutionFrequency parameter in the PutConfigRule API set with creating or changing AWS Config periodic rules.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.You might see PutDeliveryChannel events logged in CloudTrail if:The deliveryFrequency parameter isn't configured for the delivery channel.The value of the MaximumExecutionFrequency parameter is less than the deliveryFrequency value set on the delivery channel.If no value for the deliveryFrequency is set, then AWS Config calls the PutDeliveryChannel API to update the MaximumExecutionFrequency value for the periodic rule.You can view the deliveryFrequency using the DescribeDeliveryChannnels command similar to the following:$ aws configservice describe-delivery-channels --region your-region{ "DeliveryChannels": [ { "configSnapshotDeliveryProperties": { "deliveryFrequency": "Twelve_Hours" }, "name": "default", "s3BucketName": "config-bucket-123456789012-your-region" } ]}You can view the PutDeliveryChannel API using AWS CloudTrail similar to the following:"eventSource": "config.amazonaws.com", "eventName": "PutDeliveryChannel", "awsRegion": "your-region", "sourceIPAddress": "192.0.2.0", "userAgent": "console.amazonaws.com", "requestParameters": { "deliveryChannel": { "name": "default", "configSnapshotDeliveryProperties": { "deliveryFrequency": "Twelve_Hours" }, "s3BucketName": "config-bucket-123456789012-your-region" } },Related informationHow can I recreate an AWS Config delivery channel?Follow"
https://repost.aws/knowledge-center/config-deliveryfrequency-change
How can I concatenate Parquet files in Amazon EMR?
"I'm using S3DistCp (s3-dist-cp) to concatenate files in Apache Parquet format with the --groupBy and --targetSize options. The s3-dist-cp job completes without errors, but the generated Parquet files are broken. When I try to read the Parquet files in applications, I get an error message similar to the following:"Expected n values in column chunk at /path/to/concatenated/parquet/file offset m but got x values instead over y pages ending at file offset z""
"I'm using S3DistCp (s3-dist-cp) to concatenate files in Apache Parquet format with the --groupBy and --targetSize options. The s3-dist-cp job completes without errors, but the generated Parquet files are broken. When I try to read the Parquet files in applications, I get an error message similar to the following:"Expected n values in column chunk at /path/to/concatenated/parquet/file offset m but got x values instead over y pages ending at file offset z"Short descriptionS3DistCp doesn't support concatenation for Parquet files. Use PySpark instead.ResolutionYou can't specify the target file size in PySpark, but you can specify the number of partitions. Spark saves each partition to a separate output file. To estimate the number of partitions that you need, divide the size of the dataset by the target individual file size.1.    Create an Amazon EMR cluster with Apache Spark installed.2.    Specify how many executors you need. This depends on cluster capacity and dataset size. For more information, see Best practices for successfully managing memory for Apache Spark applications on Amazon EMR.$ pyspark --num-executors number_of_executors3.    Load the source Parquet files into a Spark DataFrame. This can be an Amazon Simple Storage Service (Amazon S3) path or an HDFS path. For example:df=sqlContext.read.parquet("s3://awsdoc-example-bucket/parquet-data/")HDFS:df=sqlContext.read.parquet("hdfs:///tmp/parquet-data/")4.    Repartition the DataFrame. In the following example, n is the number of partitions.df_output=df.coalesce(n)5.    Save the DataFrame to the destination. This can be an Amazon S3 path or an HDFS path. For example:df_output.write.parquet("URI:s3://awsdoc-example-bucket1/destination/")HDFS:df=sqlContext.write.parquet("hdfs:///tmp/destination/")6.    Verify how many files are now in the destination directory:hadoop fs -ls "URI:s3://awsdoc-example-bucket1/destination/ | wc -l"The total number of files should be the value of n from step 4, plus one. The Parquet output committer writes the extra file, called _SUCCESS.Follow"
https://repost.aws/knowledge-center/emr-concatenate-parquet-files
How can I check the amount of backup storage being used by my Aurora PostgreSQL-Compatible DB instances?
I want to check the amount of storage used by the backup of my Amazon Aurora PostgreSQL-Compatible Edition DB instances.
"I want to check the amount of storage used by the backup of my Amazon Aurora PostgreSQL-Compatible Edition DB instances.ResolutionNote: If you receive errors when running AWS Command Line Interface commands, make sure that you’re using the most recent version of the AWS CLI.To check the storage usage of your Aurora DB instances backup, you can use these Amazon CloudWatch metrics:BackupRetentionPeriodStorageUsed - the amount of backup storage used for continuous backups at a given time.SnapshotStorageUsed - the amount of backup storage used for storing manual snapshots beyond the backup retention period.TotalBackupStorageBilled - the sum of BackupRetentionPeriodStorageUsed and SnapshotStorageUsed minus the mount of free backup storage.For more information on using these metrics, see Understanding Amazon Aurora backup storage usage.For more information on accessing these metrics using Amazon CloudWatch, see Monitoring Amazon Aurora metrics with Amazon CloudWatch.You can also using the AWS CLI to view Amazon CloudWatch metrics about your Aurora instances. For instructions on how to view these metrics in the AWS CLI, see Viewing DB cluster metrics in the CloudWatch console and AWS CLI.Note: Aurora databases don't have the FreeStorageSpace metric.Preformatted code block (delete if article contains no code)Related informationOther considerations for Aurora backup storage and pricingFollow"
https://repost.aws/knowledge-center/aurora-backup-storage-usage
How do I troubleshoot issues with an Amazon Cognito user's email_verified attribute?
I want to resolve any issues with the email_verified attribute for Amazon Cognito users.
"I want to resolve any issues with the email_verified attribute for Amazon Cognito users.Short descriptionAn Amazon Cognito user pool has a set of standard attributes that are used to identify individual users. The email_verified attribute that indicates whether a user's email address has been verified can change in the following situations:A user updates their email address. When a user updates their email address, Amazon Cognito changes the email_verified attribute to unverified.An email address is configured as an alias. Then, a user with a duplicate email address is created. When an email address is set as an alias, only one user can hold the email address value as the email_verified attribute. If the newer user's account confirmation succeeds, then the email address alias is transferred to the newer user. The former user's email address is then unverified. For more information, see User pool attributes and review the Customizing sign-in attributes section.A federated user or a user linked to a federated user signs in with an email mapping. When a federated user signs in, a mapping must be present for each user pool attribute that your user pool requires. If an email attribute is mapped, the email_verified attribute changes to unverified by default.ResolutionTo resolve issues with the email_verified attribute, follow the steps that apply to your situation.Important: In the following example AWS Command Line Interface (AWS CLI) commands, replace all instances of example strings with your values. (For example, replace "example_access_token" with your access token value.)Verification after an email address updateTo verify the email address after a user update:1.    For Amazon Cognito to send the verification code to an updated email address, configure the email verification setting for the user pool.2.    If necessary, update the email address by calling the UpdateUserAttributes API or the AdminUpdateUserAttributes API.An example update-user-attributes command:aws cognito-idp update-user-attributes --access-token "example_access_token" --user-attributes Name="email",Value="example_new_email"An example admin-update-user-attributes command:aws cognito-idp admin-update-user-attributes --user-pool-id "example_user_pool_id" --username "example_username" --user-attributes Name="email",Value="example_new_email"Important: The AdminUpdateUserAttributes API can also be used to automatically verify the email by setting the email_verified attribute to True. If the email address is automatically verified with the AdminUpdateUserAttributes API, the next step isn't necessary. The next step is necessary when using the UpdateUserAttributes API.3.    Check your new email inbox for the verification code.4.    Call the VerifyUserAttribute API. Specify the parameters for AccessToken and AttributeName as "email" and enter the verification code from the previous step.An example verify-user-attribute command:aws cognito-idp verify-user-attribute --access-token "example_access_token" --attribute-name "email" --code "example_verification_code"To verify the email address after the initial code expires:1.    Sign in to your application with your user name to retrieve your access token.2.    Call the GetUserAttributeVerificationCode API. Set the AttributeName parameter as "email".An example get-user-attribute-verification-code command:aws cognito-idp get-user-attribute-verification-code --access-token "example_access_token" --attribute-name "email"3.    Call the VerifyUserAttribute API. 
Specify the parameters for AccessToken and AttributeName as "email". Enter the verification code from the previous step.Confirm a new user with a duplicate email addressTo allow the confirmation of a new user with a duplicate email address:1.    If necessary, call the SignUp API to sign up a user with a configured email address.An example sign-up command:aws cognito-idp sign-up --client-id "example_client_id" --username "example_username" --password "example_password" --user-attributes Name="email",Value="example_user_email"2.    Call the ConfirmSignUp API with the ForceAliasCreation parameter set to True.An example confirm-sign-up command:aws cognito-idp confirm-sign-up --client-id "example_client_id" --username "example_username" --confirmation-code "example_confirmation_code" --force-alias-creationTo deny the confirmation of a new user with a duplicate email address after sign up:1.    Call the ConfirmSignUp API with the ForceAliasCreation parameter set to False.Note: ForceAliasCreation is False by default. Therefore, it's not required to be passed as a parameter in the request.An example deny-sign-up command:aws cognito-idp confirm-sign-up --client-id "example_client_id" --username "example_username" --confirmation-code "example_confirmation_code" --no-force-alias-creation2.    By setting the ForceAliasCreation parameter to False, the API returns the following error:An error occurred (AliasExistsException) when calling the ConfirmSignUp operation: An account with the email already exists.Create a new user with a duplicate email address as an administratorTo create a new user with a duplicate email address as an administrator:1.    Call the AdminCreateUser API with a configured email address, with the email_verified attribute set to True and the ForceAliasCreation parameter set to True.An example admin-create-user command:aws cognito-idp admin-create-user --user-pool-id "example_user_pool_id" --username "example_username" --user-attributes Name="email",Value="example_user_email" Name="email_verified",Value="True" --force-alias-creationMap the email_verified attribute to a third-party identity provider (IdP)To keep the email_verified attribute verified after federation:1.    From the Amazon Cognito console, map the IdP attribute for verification status to the email_verified attribute.Note: Most OpenID Connect (OIDC) providers include the email_verified attribute.Related informationVerifying updates to email addresses and phone numbersFollow"
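For completeness, the following command shows the administrative shortcut mentioned above: using AdminUpdateUserAttributes to mark the email address as verified without sending a code. The user pool ID and user name are placeholders, matching the example style used in this article.
aws cognito-idp admin-update-user-attributes --user-pool-id "example_user_pool_id" --username "example_username" --user-attributes Name="email_verified",Value="true"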
https://repost.aws/knowledge-center/cognito-email-verified-attribute
How can I set up a Direct Connect public VIF?
I want to set up an AWS Direct Connect public VIF.
"I want to set up a AWS Direct Connect public VIF.Short descriptionA public VIF uses a public IP address to access all AWS public services such as Amazon Elastic Compute Cloud (Amazon EC2).Note: Public VIFs can't be used to access the Internet.ResolutionFollow these instructions to set up a AWS Direct Connect public VIF based on your scenario.IPv4 address allocation and addressing and Border Gateway Protocol (BGP) Autonomous System Number (ASN)For IPv4 addresses, use one of the following options:Use a public IPv4 CIDR block that you own.If you don't own a public IPv4 block, then check with your partner in the AWS Direct Connect Partner Program or ISP to see if they can provide you with a public IPv4 CIDR. Be sure to include the LOA-CFA authorization form stating that they authorize you to use those public IP prefixes.You can also contact AWS Support to request a public IPv4 CIDR. Be sure to provide your use case. Note that AWS can't guarantee approval for all public IPv4 CIDR requests. For more information, see Prerequisites for virtual interfaces.A public or private BGP ASN for your side of the BGP session. If you are using a public ASN, you must own it. If you are using a private ASN, it must be in the 1 to 2147483647 range. Autonomous System (AS) prepending does not work if you use a private ASN for a public virtual interface.Note: For IPv6 addresses, AWS automatically allocates you a /125 IPv6 CIDR. You can't specify your own peer IPv6 addresses.Approving prefixes and BGP ASN over public VIFWhen you create a public virtual interface, the following information is subject to approval by the Direct Connect team:The BGP Autonomous System Number (only if it's a public ASN)The Public peer IP addressesThe Public prefixes that you plan to advertise over the virtual interfaceIf you advertised the prefixes before they are approved, you might need to clear the BGP session and then re-advertise the prefixes after approval.For more information, see My Direct Connect public virtual interface is stuck in the "Verifying" state. How can I get it approved?Advertising prefixes over public VIFYou must advertise at least one public prefix using BGP.The public IP addresses used for peering and public IP addresses advertised can't overlap with other public IP addresses announced or used in Direct Connect. You can verify ownership of BGP ASN and IP address prefixes using a WHOIS query.Example output:AS | IP | BGP Prefix | CC | Registry | Allocated | AS Name12345 | 192.0.2.0 | 192.0.2.0/24 | US | arin | 1991-12-19 | EXAMPLE-02, USAWS prefixes received on premises over public VIFAfter BGP is established over your public VIF, you receive all available local and remote AWS Region prefixes. To verify the available prefixes, check that the BGP communities on the prefixes received from AWS. For more information, see How can I control the routes advertised and received over the AWS public virtual interface with Direct Connect?AWS Direct Connect applies the following BGP communities to its advertised routes:7224:8100—Routes that originate from the AWS Region where the Direct Connect point of presence is located7224:8200—Routes that originate from the continent where the Direct Connect point of presence is locatedNo tag—Global (all public AWS Regions)Connecting to AWS over public VIFDirect Connect performs inbound packet filtering to validate that the source of the traffic originated from your advertised prefix.Be sure that you connect from a prefix that's advertising to a public VIF. 
You can't connect from a prefix that isn't advertised to a public VIF.Related informationHow do I connect my private network to AWS public services using an AWS Direct Connect public VIF?Follow"
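If you create the public VIF with the AWS CLI instead of the console, the request looks similar to the following sketch. The connection ID, VLAN, ASN, peer addresses, and advertised prefix are placeholders that you replace with your own values.
aws directconnect create-public-virtual-interface --connection-id dxcon-fexample1 --new-public-virtual-interface '{"virtualInterfaceName":"my-public-vif","vlan":101,"asn":65000,"amazonAddress":"203.0.113.1/30","customerAddress":"203.0.113.2/30","addressFamily":"ipv4","routeFilterPrefixes":[{"cidr":"198.51.100.0/24"}]}'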
https://repost.aws/knowledge-center/setup-direct-connect-vif
How can I use BGP communities to control the routes advertised and received over the AWS public virtual interface with Direct Connect?
"How can I control the routes advertised and received over the AWS public virtual interface (VIF) to a specific Region, continent, or globally?"
"How can I control the routes advertised and received over the AWS public virtual interface (VIF) to a specific Region, continent, or globally?Short descriptionAWS Direct Connect locations in AWS Regions or in the AWS GovCloud (US) Region can access public services in any AWS Region (excluding the China (Beijing) Region). Direct Connect advertises all local and remote AWS Region prefixes where available, and includes on-net prefixes from other AWS non-Region points of presence (POPs) where available, such as Amazon CloudFront. For more information, see Routing policies and BGP communities.ResolutionDirect Connect supports a range of Border Gateway Protocol (BGP) community tags to help control the scope (Regional, continent, or global) of routes advertised and received over a public VIF.Direct Connect BGP community tags that AWS advertises to your customer gateway device over the public VIF include:7224:8100—Routes that originate from the AWS Region where the Direct Connect point of presence is located.7224:8200—Routes that originate from the continent where the Direct Connect point of presence is located.No tag—Global (all public AWS Regions).If you have a public VIF in the us-east-1 Region, then AWS advertises the routes associated for public resources in us-east-1 Region with a community tag of 7224:8100. For routes for public resources in North America, AWS advertises the routes with a community tag of 7224:8200. For all other prefixes, there is no tag.Direct Connect BGP community tags that you can use to select the scope of your prefixes to AWS:7224:9100—Local AWS Region where the Direct Connect point of presence is located.7224:9200—All AWS Regions for the continent (for example, North America) where the Direct Connect point of presence is located.7224:9300 or no tag—Global (all public AWS Regions).If you have a public VIF in the us-east-1 Region, you can limit the scope of the routes you advertise to us-east-1 Region with the community tag of 7224:9100. If you tag your routes with the community tag of 7224:9200, then your prefixes are advertised to all US Regions (North America continent). If you tag your routes with the community tag of 7224:9300, or if you do not tag your prefixes with a community tag, then your prefixes are advertised to all AWS Regions.For example, to limit the routes received and advertised over the public VIF to a specific local Region, make sure that you configure a prefix filter and a route map that matches the routes received from AWS with the community tag of 7224:8100, and then install only those routes. You also must advertise your prefixes to AWS with a community tag of 7224:9100. This makes sure that the routes received and advertised over the public VIF are limited to the local Region.You can use any combination of the community tags to control the routes advertised and received over an AWS public VIF.AWS Direct Connect advertises all public prefixes with the NO_EXPORT BGP community tag.For the current list of prefixes advertised by AWS, download the AWS JSON IP address ranges. For more information, see AWS IP address ranges.Related informationRouting policies and BGP communitiesFollow"
https://repost.aws/knowledge-center/control-routes-direct-connect
How do I resolve the error "GENERIC_INTERNAL_ERROR" when I query a table in Amazon Athena?
"When I query my Amazon Athena table, I receive the error "GENERIC_INTERNAL_ERROR"."
"When I query my Amazon Athena table, I receive the error "GENERIC_INTERNAL_ERROR".Short descriptionThe different types of GENERIC_INTERNAL_ERROR exceptions and their causes are the following:GENERIC_INTERNAL_ERROR: null: You might see this exception under either of the following conditions:You have a schema mismatch between the data type of a column in table definition and the actual data type of the dataset.You're running a CREATE TABLE AS SELECT (CTAS) query with inaccurate syntax.GENERIC_INTERNAL_ERROR: parent builder is null: You might see this exception when you query a table with columns of data type array, and the SerDe format OpenCSVSerDe. OpenCSVSerde format doesn't support the array data type.GENERIC_INTERNAL_ERROR: Value exceeds MAX_INT: You might see this exception when the source data column is defined with the data type INT and has a numeric value greater than 2,147,483,647.GENERIC_INTERNAL_ERROR: Value exceeds MAX_BYTE: You might see this exception when the source data column has a numeric value exceeding the allowable size for the data type BYTE. The data type BYTE is equivalent to TINYINT. TINYINT is an 8-bit signed INTEGER in two’s complement format with a minimum value of -128 and a maximum value of 127.GENERIC_INTERNAL_ERROR: Number of partition values does not match number of filters: You might see this exception if you have inconsistent partitions on Amazon Simple Storage Service (Amazon S3) data. You might have inconsistent partitions under either of the following conditions:Partitions on Amazon S3 have changed (example: new partitions added).Number of partition columns in the table do not match that in the partition metadata.GENERIC_INTERNAL_ERROR: Multiple entries with same key: You might see this exception due to Keys (columns) in the JSON data when:The same name is used twice.The same name is used when it’s converted to all lowercase.ResolutionGENERIC_INTERNAL_ERROR:nullColumn data type mismatch: Be sure that the column data type in the table definition is compatible with the column data type in the source data. Athena uses schema-on-read technology. This means that your table definitions are applied to your data in Amazon S3 when the queries are processed.For example, when a table created on Parquet files:Athena reads the schema from the filesThen Athena validates the schema against the table definition where the Parquet file is queried.If the underlying data type of a column doesn't match the data type mentioned during table definition, then the Column data type mismatch error is shown.To resolve this issue, verify that the source data files aren't corrupted. If there is a schema mismatch between the source data files and table definition, then do either of the following:Update the schema using the AWS Glue Data Catalog.Create a new table using the updated table definition.If the source data files are corrupted, delete the files, and then query the table.Inaccurate syntax: You might get the "GENERIC INTERNAL ERROR:null" error when both of the following conditions are true:You created the table using the CTAS query.You used the same column for table properties partitioned_by and bucketed_by.To avoid this error, you must use different column names for partitioned_by and bucketed_by properties when you use the CTAS query. 
To resolve this error, create a new table by choosing different column names for partitioned_by and bucketed_by properties.GENERIC_INTERNAL_ERROR: parent builder is nullTo resolve this error, find the column with the data type array, and then change the data type of this column to string. To change the column data type to string, do either of the following:Update the schema in the Data Catalog.Create a new table by choosing the column data type as string.Run the SHOW CREATE TABLE command to generate the query that created the table. Then view the column data type for all columns from the output of this command. Find the column with the data type array, and then change the data type of this column to string.To update the schema of the table with Data Catalog, do the following:Open the AWS Glue console.On the navigation pane, choose Tables.Select the table that you want to update.Choose Action, and then choose View details.Choose Edit schema.Scroll to the column with data type array, and then choose array.For Column type, select string from the dropdown list.Choose Update.On the Edit schema page, choose Save.GENERIC_INTERNAL_ERROR: Value exceeds MAX_INTTo resolve this error, find the column with the data type int, and then update the data type of this column from int to bigint. To change the column data type, update the schema in the Data Catalog or create a new table with the updated schema.Run the SHOW CREATE TABLE command to generate the query that created the table. Then view the column data type for all columns from the output of this command. Find the column with the data type int, and then change the data type of this column to bigint.To update the schema of the table with Data Catalog, do the following:Open the AWS Glue console.On the navigation pane, choose Tables.Select the table that you want to update.Choose Action, and then choose View details.Choose Edit schema.Scroll to the column with data type int, and then choose int.For Column type, select bigint from the dropdown list.Choose Update.On the Edit schema page, choose Save.GENERIC_INTERNAL_ERROR: Value exceeds MAX_BYTETo resolve this error, find the column with the data type tinyint. Then, change the data type of this column to smallint, int, or bigint. Or, you can resolve this error by creating a new table with the updated schema.Run the SHOW CREATE TABLE command to generate the query that created the table. Then, view the column data type for all columns from the output of this command. Find the column with the data type tinyint, and change the data type of this column to smallint, bigint, or int.To update the schema of the table with Data Catalog, do the following:Open the AWS Glue console.In the navigation pane, choose Tables.Select the table that you want to update.Choose Action, and then choose View details.Choose Edit schema.Scroll to the column with data type tinyint, and then choose tinyinit.For Column type, select smallint, bigint, or int from the dropdown list.Choose Update.On the Edit schema page, choose Save.GENERIC_INTERNAL_ERROR: Number of partition values does not match number of filtersTo resolve this error, do either of the following:Create a new table using an AWS Glue Crawler.Drop the partitions using the ALTER TABLE DROP PARTITION statement. Then, add the same number of partitions as in the table definition with the ALTER TABLE ADD PARTITION statement. For example, suppose you have two partition columns date and country in the table definition and a partition that has only one column date. 
Drop the partition with the date column, and then add both partitions to the table.ALTER TABLE doc_example_table DROP PARTITION (date = '2014-05-14'); ALTER TABLE doc_example_table ADD PARTITION (date = '2016-05-14', country = 'IN');GENERIC_INTERNAL_ERROR: Multiple entries with same keyIf rows have multiple columns with the same key, pre-processing the data is required to include a valid key-value pair. If only some of the records have duplicate keys, and if you want to ignore these records, set ignore.malformed.json as SERDEPROPERTIES in org.openx.data.jsonserde.JsonSerDe.If the key names are the same but in different cases (for example: “Column”, “column”), you must use mapping. This is because Hive doesn’t support case-sensitive columns. To do this, you must configure how the SerDe handles key casing.Do the following:CREATE TABLE mytable ( time1 string, time2 string) ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'WITH SERDEPROPERTIES ("case.insensitive" = "false", --tells hive to ignore key case"mapping.time1"= "time", -- lowercase 'time' mapped into 'time1'"mapping.time2"= "Time") -- uppercase to 'time2'Related informationData types in Amazon AthenaPartitioning data in AthenaFollow"
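Before editing the schema in the console, it can help to list the column types that the Data Catalog currently stores for the table so that you can spot the mismatched column. This is a sketch; the database and table names are placeholders.
aws glue get-table --database-name my_database --name my_table --query "Table.StorageDescriptor.Columns[].[Name,Type]" --output table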
https://repost.aws/knowledge-center/athena-generic-internal-error
How can I use an SSH tunnel through AWS Systems Manager to access my private VPC resources?
I want to use an SSH tunnel through AWS Systems Manager to access my private VPC resources. How can I do this?
"I want to use an SSH tunnel through AWS Systems Manager to access my private VPC resources. How can I do this?Short descriptionTo create an SSH tunnel, you can use Session Manager, a capability of AWS Systems Manager that lets you use port forwarding for remote hosts. This feature is supported on SSM Agent versions 3.1.1374.0 and later. Port forwarding is an alternative to the steps below. For more information about remote host port forwarding, see Start a session.Session Manager uses the Systems Manager infrastructure to create an SSH-like session with an instance. Session Manager tunnels real SSH connections, allowing you to tunnel to another resource within your virtual private cloud (VPC) directly from your local machine. A managed instance that you create acts as a bastion host, or gateway, to your AWS resources.The benefits of this configuration are:Increased Security: This configuration uses only one Amazon Elastic Compute Cloud (Amazon EC2) instance (the bastion host), and connects outbound port 443 to Systems Manager infrastructure. This allows you to use Session Manager without any inbound connections. The local resource must allow inbound traffic only from the instance acting as bastion host. Therefore, there's no need to open any inbound rule publicly.Ease of use: You can access resources in your private VPC directly from your local machine.Note: For instructions to access your EC2 instances with a terminal or a single port forwarding using Systems Manager, see Setting up Session Manager.PrerequisitesComplete the Session Manager prerequisitesInstall the Session Manager plugin for the AWS Command Line Interface (AWS CLI)Allow SSH connections through Session Manager and make sure that SSH connection requirements are met.Note: You must have the following installed to use the SSH feature:1.    SSM Agent v2.3.672.0 or newer.2.    Session Manager Plugin v1.1.23 or newer on your local machine.3.    AWS CLI v1.16.12 or newer on your local machine.ResolutionTo start the SSH tunnel using Session Manager, follow these steps:Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.1.    To start the SSH tunnel, run the following command:ssh -i /path/my-key-pair.pem username@instance-id -L localport:targethost:destport2.    To test access to the tunnel on the target port created in step 1, run the following command:telnet 127.0.0.1 localportIn the preceding example, 127.0.0.1 and localport translate to access targethost:destport.Example ConfigurationsScenario 1Create a tunnel from your local machine to access a MySQL database running on an EC2 instance using the SSM host as a bastion host.Resources usedinstance1: An EC2 instance acting as a bastion host and managed by AWS Systems Manager.    Hostname = ec2-198-51-100-1.compute-1.amazonaws.com Instance id = i-0123456789abcdefainstance2: An EC2 instance running MySQL Database on the default port 3306.    Hostname = ec2-198-51-100-2.compute-1.amazonaws.comInstructions1.    From a local machine (for example, your laptop), run the SSH command to connect to instance1, using Session Manager-based SSH. This command establishes a tunnel to port 3306 on instance2, and presents it in your local machine on port 9090.ssh -i /path/key-pair_instance1.pem username_of_instance1@i-0123456789abcdefa -L 9090:ec2-198-51-100-2.compute-1.amazonaws.com:3306Note: In the preceding example, port 9090 is available on the local machine.2.    
From the local machine, access the database using the available port used in step 1 (in this example, 9090).mysql -u user -h 127.0.0.1 -P 9090 -p passwordNote: Any security groups, network access control list (network ACL), security rules, or third-party security software that exist on instance2 must allow traffic from instance1. In the preceding example, instance2 must allow port 3306 access from instance1.Scenario 2Create three tunnels over a single SSH connection from your local machine to:Connect to the SSH port in instance1Access a MySQL database in RDS instanceAccess a webserver in instance3Resources usedinstance1: An EC2 instance acting as a bastion host and managed by AWS Systems Manager.    Hostname = ec2-198-51-100-1.compute-1.amazonaws.com Instance id = i-0123456789abcdefaRDS instance: A MySQL RDS instance located in a private subnet.    Hostname = DBinstanceidentifier.abcdefg12345.region.rds.amazonaws.cominstance3: An EC2 instance located in a private subnet    Hostname = ec2-198-51-100-3.compute-1.amazonaws.comInstructions1.    Start the session with three tunnels using the SSH command.Note: There are three separate tunnel invocations in the command.ssh -i /path/key-pair_instance1.pem username_of_instance1@i-0123456789abcdefa -L 8080:ec2-198-51-100-1.compute-1.amazonaws.com:22 -L 9090:DBinstanceidentifier.abcdefg12345.region.rds.amazonaws.com:3306 -L 9091:ec2-198-51-100-3.compute-1.amazonaws.com:80Note: In the preceding example, ports 8080, 9090, and 9091 are available on the local machine.2.    Access SSH from the local machine to instance1. The local port 8080 tunnels to the SSH port (22) on instance1. The key-pair and username are for the instance you are tunneling to (instance1, in this example).ssh -i /path/key-pair_instance1.pem username_of_instance1@127.0.0.1 -p 80803.    Access the database on RDS instance. The local port 9090 tunnels to port 3306 on RDS instance. You can use MySQL workbench, which allows you to access the DB server using the GUI, with 127.0.0.1 as hostname and 9090 as port. Or, run the following command in the shell command prompt:mysql -u user -h 127.0.0.1 -P 9090 -p password4.    From the local machine, to access the website on instance3, open the browser and navigate to the website.http://127.0.0.1:9091Important: Any security groups, network ACL, security rules, or third-party security software that exist on RDS instance and instance3 must allow traffic from instance1. In the preceding example, instance3 must allow port 80 access from instance1.Related informationAutomated configuration of Session Manager without an internet gatewaySecurely connect to an Amazon RDS or Amazon EC2 database instance remotely with your preferred GUIFollow"
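As noted in the short description, remote host port forwarding through Session Manager can replace the SSH tunnel entirely when you only need a single port. The following sketch reuses the example instance ID and RDS hostname from Scenario 2; the ports are placeholders.
# Forward local port 9090 to port 3306 on the RDS instance, through the bastion instance1
aws ssm start-session --target i-0123456789abcdefa --document-name AWS-StartPortForwardingSessionToRemoteHost --parameters '{"host":["DBinstanceidentifier.abcdefg12345.region.rds.amazonaws.com"],"portNumber":["3306"],"localPortNumber":["9090"]}'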
https://repost.aws/knowledge-center/systems-manager-ssh-vpc-resources
How do I mount an EFS file system on an ECS container or task running on EC2?
I want to use Amazon Elastic File System (Amazon EFS) with Amazon Elastic Container Service (Amazon ECS) containers or tasks using an Amazon Elastic Compute Cloud (Amazon EC2) launch type. How can I do this?
"I want use Amazon Elastic File System (Amazon EFS) with Amazon Elastic Container Service (Amazon ECS) container or tasks using an Amazon Elastic Compute Cloud (Amazon EC2) launch type. How can I do this?Short descriptionYou can mount an EFS file system on a task or container running on an EC2 instance. To do this, create a task definition that provides the file system ID in the volume task definition parameters. This allows the EFS file system to automatically mount to the tasks that you specify in your task definition.Required resources:Amazon ECS cluster (EC2 launch type)Amazon EFS file systemResolutionNetwork requirementsThe EFS file system and ECS cluster must be in the same VPC.The security groups associated with the EFS file system must allow inbound connections on port 2049 (network file system, or NFS) from the ECS container instance and the ECS task.Security groups of the ECS instance or tasks must allow outbound connections on port 2049 to the EFS file system's security group.Create a task definition1.    Open the Amazon ECS console and select Task Definitions, Create new Task Definition.2.    Choose EC2 for the launch type compatibility, then select Next step.3.    In Configure task and container definitions, enter a name for your task definition.4.    In the Volume section, choose Add volume.5.    Enter the name of the volume, and then select EFS from the Volume types drop down menu.6.    For the File system ID, select the ID of the file system to use with the ECS tasks.7.    (Optional) Specify the Root directory, Encryption in transit, and EFS IAM authorization if needed based on your requirements. If no options are modified, then the default root directory "/" is used.8.    Select Add.9.    While creating the container, under Container definitions, select Add container to use the previously created volume. Then, under Storage and Logging in the Mount points sub-section, select the volume that you created in step 4.10.    For container path, choose the directory path within the container for your application, and then choose Add.11.    Complete the remaining required fields in the task definition wizard and then choose Create.In the following example, the task definition creates a data volume named efs-ec2-test. The nginx container mounts the host data volume at the /usr/share/nginx/html path.{ "containerDefinitions": [ { "memory": 128, "portMappings": [ { "hostPort": 80, "containerPort": 80, "protocol": "tcp" } ], "essential": true, "mountPoints": [ { "containerPath": "/usr/share/nginx/html", "sourceVolume": "efs-ec2-test" } ], "name": "nginx", "image": "nginx" } ], "volumes": [ { "name": "efs-ec2-test", "efsVolumeConfiguration": { "fileSystemId": "fs-1324abcd", "transitEncryption": "ENABLED" } } ], "family": "efs-test"}Note: Replace the fileSystemid, containerPath, and other task definition parameters based on the values for your custom configuration.In the preceding example, you can create a sample index.html file in the file system's root directory with the following content:<html> <body> <h1>You are using an Amazon EFS file system for persistent container storage.</h1> </body></html>Run an ECS task1.    Run the ECS task using the task definition created earlier.2.    Make sure that the EFS file system mounts successfully to the EC2 container by accessing the website using the ECS instance's public IP address.Follow"
https://repost.aws/knowledge-center/efs-mount-on-ecs-container-or-task
How can I troubleshoot high latency on DynamoDB Accelerator (DAX) clusters?
My read or write requests in Amazon DynamoDB Accelerator (DAX) experience high latency. How do I troubleshoot this?
"My read or write requests in Amazon DynamoDB Accelerator (DAX) experience high latency. How do I troubleshoot this?ResolutionThere are multiple reasons why you might receive latency in your requests. Refer to each of the potential issues below to troubleshoot your latency.The cluster or node is experiencing high loadLatency is often caused by a cluster or node that's experiencing a high load on the DAX cluster. This latency can be impacted further if you have your client configured to a single node URL instead of the cluster URL. In this case, if the node is suffering any issue during a high load, then the client requests suffer latency or throttling.To resolve latency and throttling caused by a high load on single clusters or nodes, use horizontal scaling or vertical scaling.Misconfiguration in the DAX clientIf you lower the withMinIdleConnectionSize parameter, then latency across the DAX cluster is likely to increase. This parameter sets the minimum number of idling connections with the DAX cluster. For every request, the client will use an available idle connection. If a connection isn't available, then the client establishes a new one. For example, if the parameter is set to 20, then there is a minimum of 20 idle connections with the DAX cluster.The client maintains a connection pool. When an application makes an API call to DynamoDB or DAX, the client leases a connection from the connection pool. Then, the client makes the API call and returns the connection to the pool. However, the connection pool has an upper limit. If you make a large number of API calls to DAX at once, then they might exceed the limit of the connection pool. In this case, some requests must wait for other requests to complete before obtaining leases from the connection pool. This results in requests queuing up at the connection pool level. As a result, the application experiences an increase in round-trip latency.Therefore, to decrease periodic traffic spikes in your application, adjust the parameters setMinIdleConnectionSize, getMinIdleConnectionSize, and withMinIdleConnectionSize. These parameters play a key role in the latency of a DAX cluster. Configure them for your API calls so that DAX uses an appropriate number of idling connections without the need to reestablish new connections.Missed items in the cacheIf a read request misses an item, then DAX sends the request to DynamoDB. DynamoDB processes the requests using eventually consistent reads and then returns the items to DAX. DAX stores them in the item cache and then returns them to the application. Latency in the underlying DynamoDB table can cause latency in the request.Cache misses commonly happen for two reasons:1.    Strongly consistent reads: Strongly consistent reads for the same item aren't cached by DAX. This results in a cache miss because the entries bypass DAX and are retrieved from the DynamoDB table itself. You can use eventually consistent reads to solve this issue, but note that DynamoDB must first read the data for the data to be cached.2.    Eviction policy in DAX: Queried data that's already evicted from the cache results in a miss. DAX uses three different values to determine cache evictions:DAX clusters use a Least Recently Used (LRU) algorithm to prioritize items. Items with the lowest priority are evicted when the cache is full.DAX uses a Time-to-Live (TTL) value for the period of time that items are available in the cache. 
After an item's TTL value is exceeded, the item is evicted.Note: If you're using the default TTL value of five minutes, then check to see if you're querying the data after the TTL time.DAX uses write-through functionality to evict older values as new values are written. This helps keep the DAX item cache consistent with the underlying data store, using a single API call.To extend the TTL value of your items, see Configuring TTL settings.Note: You can't modify a parameter group while it's in use in a running DAX instance.Cache misses can also occur when maintenance patching is applied to a DAX cluster. Use multiple node clusters to reduce this downtime.Maintenance windowsLatency might occur during the weekly maintenance window, especially if there are software upgrades, patches, or system changes to the cluster's nodes. In most cases, requests are handled successfully by other available nodes that aren't undergoing maintenance. A cluster with a high number of requests during heavy maintenance can experience failure.To reduce chances of latency or failure, configure the maintenance window to your off-peak hour. Doing so allows the cluster to upgrade during a period of lighter request load.Latency in the DynamoDB tableWith write operations, data is first written to the DynamoDB table and then to the DAX cluster. The operation is successful only if the data is successfully written to both the table and to DAX. Latency in the underlying DynamoDB table can cause latency in the request. To reduce this latency, see How can I troubleshoot high latency on an Amazon DynamoDB table?To further configure DynamoDB to your application's latency requirements, see Tuning AWS Java SDK HTTP request settings for latency-aware Amazon DynamoDB applications.Request timeout periodThe parameter setIdleConnectionTimeout determines the timeout period for idle connections, and setConnectTimeout determines the timeout period for connections with the DAX cluster. These two parameters deal with timeouts of the connection pools, which can affect the latency of your cluster.Configure the request timeout for connections with the DAX cluster by adjusting the setRequestTimeout parameter. For more information, see setRequestTimeout in the DAX documentation.It's also a best practice to use exponential backoff retries, which reduce request errors and also operational costs.Note: DAX doesn't provide a cluster latency metric in CloudWatch Metrics.Follow"
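The exponential backoff recommendation above can be illustrated with a small, generic Python helper. This is a sketch of the retry pattern itself, not the DAX SDK's built-in configuration, and the wrapped operation is a placeholder.

# Generic exponential backoff with full jitter; the retried operation is a placeholder.
import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay=0.05, max_delay=2.0):
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))  # full jitter before retrying

# Example usage with a placeholder call:
# result = call_with_backoff(lambda: dax_client.get_item(...))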
https://repost.aws/knowledge-center/high-latency-dax-clusters
How do I check if resource record sets in my Route 53 public hosted zone are accessible from the internet?
I created a public hosted zone in Amazon Route 53 and added resource record sets in it. How do I verify that my resource record sets are reachable from the internet?
"I created a public hosted zone in Amazon Route 53 and added resource record sets in it. How do I verify that my resource record sets are reachable from the internet?Short descriptionCheck whether your resource record sets are accessible from the internet using one of the following methods:The Route 53 checking toolThe dig tool (for Linux, Unix, or Mac)The nslookup tool (for Windows)Note: The steps in this article verify that the public hosted zone is created successfully and accessible. If you want your entire domain resolvable, then verify the following:Update the domain registration to use Amazon Route 53 name serversUpdate the NS records to use Route 53 name serversResolutionMethod 1: Use the Route 53 checking toolUse the Route 53 checking tool to see how Route 53 responds to DNS queries.Method 2: Use the dig tool (for Linux, Unix, or Mac)1.    Find the four authoritative name servers for your public hosted zone.2.    In your resource record set’s configuration, find the associated domain name (Name), record type (Type), and value (Value).3.    Query one of the authoritative name servers. In your command line argument, specify the authoritative name server and the resource record set's domain name and record type. For example:$ dig @ns-###.awsdns-##.com mailserver1.example.com MX$ dig @ns-###.awsdns-##.com _text_.example.com TXT$ dig @ns-###.awsdns-##.com cname.example.com CNAME$ dig @ns-###.awsdns-##.com subdomain.example.com NS$ dig @ns-###.awsdns-##.com www.example.com ANote: The syntax for dig varies between Linux distributions. Use man dig to find the correct syntax for your particular distribution.4.    Review the output and verify that the ANSWER SECTION matches your resource record set.For example, if:Record name = mailserver1.example.comType = MXValue = inbound-smtp.mailserver1.example.comthen the correct dig output is:;; ANSWER SECTION:MAILSERVER1.EXAMPLE.COM 300 IN MX 10 inbound-smtp.mailserver1.example.com.Method 3: Use the nslookup tool (for Windows)1.    Open the Windows Command Prompt.2.    Run the following command: nslookup. The output looks similar to this:C:\Users\Administrator>nslookupDefault Server: ip-172-31-0-2.ap-southeast-2.compute.internalAddress: 172.31.0.23.    Specify the resource record set type using set type=A:Note: You can also add any other required resource record type.set type=A4.    Specify one of the Route 53 name servers (NS) from the hosted zone (HZ) to query. In this example, enter server ns-1276.awsdns-31.org. The output looks similar to this:server ns-1276.awsdns-31.orgDefault Server: ns-1276.awsdns-31.orgAddresses: 2600:9000:5304:fc00::1205.251.196.2525.    Enter the record to query. For example, "aws.amazondomains.com". The query is done against the server specified earlier:aws.amazondomains.comServer: ns-1276.awsdns-31.orgAddresses: 2600:9000:5304:fc00::1205.251.196.2526.    The response is returned by the Route 53 NS:Name: aws.amazondomains.comAddress: 1.1.1.1Related informationChecking DNS responses from Route 53Follow"
https://repost.aws/knowledge-center/route-53-reachable-resource-record-sets
How do I edit my Amazon SNS topic's access policy?
I want to allow other AWS Identity and Access Management (IAM) entities access to my Amazon Simple Notification Service (Amazon SNS) topic. How do I edit my Amazon SNS topic's access policy to grant the required permissions?
"I want to allow other AWS Identity and Access Management (IAM) entities to access to my Amazon Simple Notification Service (Amazon SNS) topic. How do I edit my Amazon SNS topic's access policy to grant the required permissions?ResolutionTo edit your Amazon SNS topic's access policy using the Amazon SNS console1.    Open the Amazon SNS console.2.    In the left navigation pane, choose Topics.3.    Choose your Amazon SNS topic's name.4.    Choose the Edit button.5.    Expand the Access policy - optional section.6.    Edit the access policy to grant the required permissions for your use case.Note: For more information on how to write access policies, see Overview of managing access in Amazon SNS.7.    Choose Save Changes.To edit your Amazon SNS topic's access policy using the AWS CLINote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.To modify, add, or remove permissions, run the following SetTopicAttributes command:Important: Replace <TopicARN> with your topic's Amazon Resource Name (ARN). Replace testpolicy.json with the path to your policy document.aws sns set-topic-attributes --topic-arn '<TopicARN>' --attribute-name 'Policy' --attribute-value file://testpolicy.json-or-To add permissions only, run the following AddPermission command:Important: Replace <TopicARN> with your topic's ARN. Replace AllowProdAccountsXXX with a unique identifier for the new policy statement. Replace AWS Account ID with the account IDs of the IAM entities that you want to allow access to specific actions. Replace Publish and Subscribe with the list of actions that you want to allow for the specified IAM entities.sns add-permission --topic-arn '<TopicARN>' --label 'AllowProdAccountsXXX' --aws-account-id 'AWS Account ID' --action-name 'Publish' 'Subscribe'Related informationExample cases for Amazon SNS access controlActions, resources, and condition keys for Amazon SNSFollow"
https://repost.aws/knowledge-center/sns-edit-topic-access-policy
Why can't I delete an AWS Config rule?
"I can't delete my AWS Config rule, or I receive an error similar to the following:"An error has occurred with AWS Config.""
"I can't delete my AWS Config rule, or I receive an error similar to the following:"An error has occurred with AWS Config."ResolutionTo troubleshoot this issue, check the following:The AWS Identity and Access Management (IAM) entity has permissions for the DeleteConfigRule API actionOpen the IAM console, and then in the navigation pane choose Users or Roles.Choose the user or role that you used to delete the AWS Config rule, and expand Permissions policies.In the Permissions tab, choose JSON.In the JSON preview pane, confirm that the IAM policy allows permissions for the DeleteConfigRule API action.The IAM entity permission boundary allows the DeleteConfigRule API actionIf the IAM entity has a permissions boundary, be sure that it allows the DeleteConfigRule API action.Open the IAM console, and then in the navigation pane choose Users or Roles.Choose the user or role that you used to delete the AWS Config rule, expand Permissions boundary, and then choose JSON.In the JSON preview pane, confirm that the IAM policy allows permissions for the DeleteConfigRule API action.The service control policy (SCP) allows the DeleteConfigRule API actionOpen the AWS Organizations console using the management account for the organization.In Account name, choose the AWS account.In Policies, expand Service control policies and note the SCP policies that are attached.At the top of the page, choose Policies.Select the policy, and then choose View details.In the JSON preview pane, confirm that the policy allows the DeleteConfigRule API action.The rule isn't a service-linked ruleWhen you enable a security standard, AWS Security Hub creates AWS Config service-linked rules for you. You can't delete these service-linked rules using AWS Config, so the delete button is grayed out. To remove the AWS Config service-linked rules, see Disabling a security standard.No remediation actions are in progressYou can't delete AWS Config rules that have remediation actions in progress. Follow the instructions to delete the remediation action that is associated with that rule. Then, try deleting the AWS Config rule again.Important: Delete only remediation actions that are in failed or successful states.If the remediation action fails to delete, see How can I resolve the error "NoSuchRemediationConfigurationException" or "unexpected internal error" when trying to delete a remediation action in AWS Config?Related informationPermissions boundaries for IAM entitiesWhat's the difference between an AWS Organizations service control policy and an IAM policy?Service-linked AWS Config rulesManaging your AWS Config rulesFollow"
https://repost.aws/knowledge-center/delete-config-rule
Why is my Reserved Instance showing as "Retired" in the console?
"I'm not getting the billing benefit of a Reserved Instance (RI), and the RI is displayed as "Retired" in the console."
"I'm not getting the billing benefit of a Reserved Instance (RI), and the RI is displayed as "Retired" in the console.ResolutionRIs appear as retired in the Billing and Cost Management console for the following reasons:The RI term expired. After the term expires, you no longer receive the billing benefit or capacity reservation, and the RI is marked as retired in the console.The upfront charge for the RI wasn't processed successfully, and a new billing period has started. If the upfront charge for the RI isn't processed successfully, the charge remains in the payment-pending state until it's retried successfully. If the charge is in the payment-pending state and a new billing period starts, then the RI moves to the retired state, and the upfront charge can't be retried.The RI was modified. When an RI is modified, the old RI is marked as retired in the console.You sold the RI on the RI Marketplace. After the RI is sold, it is marked as retired in the console.RIs in the retired state can't be activated, but you can purchase a new RI with the same configuration.Related informationAmazon EC2 Reserved InstancesAmazon RDS Reserved InstancesFollow"
https://repost.aws/knowledge-center/ec2-ri-retired
Why isn't CloudFront serving my domain name over HTTPS?
"I associated an SSL certificate with my Amazon CloudFront distribution, but I can't access my domain name over HTTPS. Why?"
"I associated an SSL certificate with my Amazon CloudFront distribution, but I can't access my domain name over HTTPS. Why?ResolutionTo resolve problems with accessing your domain name over HTTPS, check the following:Your SSL certificate's domain name must be added as an alternate domain name (CNAME) in your CloudFront distribution's settings. For more information, see Using custom URLs for files by adding alternate domain names (CNAMEs).The domain name of the SSL certificate must be consistent with the domain name associated with the CloudFront distribution. For example, if you issue an SSL certificate for *.example.com, then the CloudFront distribution will support domain names such as abc.example.com or 123.example.com. However, an SSL certificate for *.example.com won't support domain names such as abc.123.example.com. To use abc.123.example.com as a domain name, you need an SSL certificate for either *.123.example.com or abc.123.example.com.If you're getting cipher or TLS version mismatch errors, verify that your client is using supported SSL or TLS protocols and ciphers. This allows communication between viewers and CloudFront.Verify that the status of your CloudFront distribution is Deployed. If the status is still InProgress, then you might not be able to access the domain name because data is still propagating across edge locations.If you recently updated your SSL certificate on AWS Certificate Manager, then verify that the certificate renewal status is Success. It might take several hours for the certificate renewal process to complete. For more information, see I renewed my Amazon-issued SSL certificate or reimported my certificate to ACM. Why does CloudFront still show the old certificate?For more information on troubleshooting SSL errors, see SSL/TLS negotiation failure between CloudFront and a custom origin server.Related informationUsing HTTPS with CloudFrontFollow"
https://repost.aws/knowledge-center/cloudfront-domain-https
How can I copy data to and from Amazon EFS in parallel to maximize performance on my EC2 instance?
I have a large number of files to copy or delete. How can I run these jobs in parallel on an Amazon Elastic File System (Amazon EFS) file system on my Amazon Elastic Compute Cloud (Amazon EC2) instance?
"I have a large number of files to copy or delete. How can I run these jobs in parallel on an Amazon Elastic File System (Amazon EFS) file system on my Amazon Elastic Compute Cloud (Amazon EC2) instance?Short descriptionUse one of the following tools to run jobs in parallel on an Amazon EFS file system:GNU parallel – For more information, see GNU Parallel on the GNU Operating System website.msrsync – For more information, see msrsync on the GitHub website.fpsync – For more information, see fpsyncon the Ubuntu manuals website.ResolutionGNU parallel1.    Install GNU parallel.For Amazon Linux and RHEL 6:$ sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm$ sudo yum install parallel nload -yFor RHEL 7:$ sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm$ sudo yum install parallel nload -yFor Amazon Linux 2:$ sudo amazon-linux-extras install epel$ sudo yum install nload sysstat parallel -yFor Ubuntu:$ sudo apt-get install parallel2.    Use rsync to copy the files to Amazon EFS.$ sudo time find -L /src -type f | parallel rsync -avR {} /dstor$ sudo time find /src -type f | parallel -j 32 cp {} /dst3.    Use the nload console application to monitor network traffic and bandwidth.$ sudo nload -u Mmsrsyncmsrsync is a Python wrapper for rsync that runs multiple rsync processes in parallel.Note: msrsync is compatible only with Python 2. You must run the msrsync script using Python version 2.7.14 or later.1.    Install msrsync.$ sudo curl -s https://raw.githubusercontent.com/jbd/msrsync/master/msrsync -o /usr/local/bin/msrsync && sudo chmod +x /usr/local/bin/msrsync2.    Use the-p option to specify the number of rsync processes that you want to run in parallel. ReplaceX with the number of rsync processes. The P option shows the progress of each job.$ sudo time /usr/local/bin/msrsync -P -p X --stats --rsync "-artuv" /src/ /dst/fpsyncThe fpsync tool synchronizes directories in parallel using fpart and rsync. It can run several rsync processes locally or launch rsync transfers on several nodes (workers) through SSH.For more information on fpart, see fpart on the Ubuntu manuals website.1.    Enable the EPEL repository, and then install the fpart package.For Amazon Linux and RHEL 6:$ sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm$ sudo yum install fpart -yFor RHEL 7:$ sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm$ sudo yum install fpart -yFor Amazon Linux 2:$ sudo amazon-linux-extras install epel$ sudo yum install fpart -yFor Ubuntu:$ sudo apt-get install fpartNote: In Ubuntu, fpsync is part of the fpart package.2.    Use fpsync to synchronize the /dst and /src directories. Replace X with the number of rsync processes that you want to run in parallel.$ sudo fpsync -n X /src /dstFollow"
https://repost.aws/knowledge-center/efs-copy-data-in-parallel
Why are my mobile text message (SMS) charges higher than expected from Amazon SNS and Amazon Pinpoint?
"I was charged more than I expected for mobile text messaging (SMS) through Amazon Simple Notification Service (Amazon SNS) and Amazon Pinpoint. Why are my SMS charges higher than expected? Also, what's the best way to calculate my required SMS spending quota for Amazon SNS and Amazon Pinpoint?"
"I was charged more than I expected for mobile text messaging (SMS) through Amazon Simple Notification Service (Amazon SNS) and Amazon Pinpoint. Why are my SMS charges higher than expected? Also, what's the best way to calculate my required SMS spending quota for Amazon SNS and Amazon Pinpoint?ResolutionAmazon SNS and Amazon Pinpoint SMS charges can be higher than expected if messages exceed the message parts per second (MPS) limits.For more information, see SMS character limits in Amazon Pinpoint. Also, Publishing to a mobile phone in the Amazon SNS Developer Guide.To calculate your required SMS spending quota for Amazon SNS and Amazon PinpointFollow the instructions in the Calculate your required SMS spending quota section of the following article: How do I request a spending limit increase for SMS messages in Amazon SNS?Note: For Amazon SNS, you can subscribe to daily SMS usage reports to monitor your SMS deliveries. For Amazon Pinpoint, you can activate event streams to Amazon Kinesis to monitor your SMS deliveries. For more information, see SMS events in the Amazon Pinpoint Developer Guide.Related informationAmazon Pinpoint pricingWorldwide SMS pricingFollow"
https://repost.aws/knowledge-center/sns-pinpoint-high-sms-charges
How do I increase the available disk space on my Amazon ECS container instances if I launched my cluster manually with an Auto Scaling group?
How do I increase the available disk space on my Amazon Elastic Container Service (Amazon ECS) container instances if I launched my Amazon ECS cluster manually with an Auto Scaling group?
"How do I increase the available disk space on my Amazon Elastic Container Service (Amazon ECS) container instances if I launched my Amazon ECS cluster manually with an Auto Scaling group?Short descriptionTo increase the storage space on your container instances, you must update the launch configuration or launch template to increase the volume size of your Amazon Elastic Block Store (Amazon EBS). Then, replace your original instances with new instances from your Auto Scaling group.To increase a container instance's storage space through this method, complete the steps below. If you launched your container instances using another method, then skip this article and complete the steps in one of the following articles:How do I increase the available disk space on my Amazon ECS container instances if I launched my ECS cluster from the AWS Management Console?How do I increase the available disk space on my Amazon ECS container instances if I launched my container instances as standalone Amazon EC2 instances?Note: Your Amazon EBS volume configuration varies depending on the Amazon ECS-optimized Amazon Machine Image (AMI) that you're using. For more information and commands on how to check the available space on your instances, see AMI storage configuration.Important: The following steps terminate the original container instances in your Amazon ECS cluster. Any data that's stored on the EBS volumes for those instances is lost when you complete these procedures.ResolutionImportant: To avoid downtime for your Amazon ECS services, you must launch replacement instances before draining your original container instances. After all the tasks stop on the original container instances, confirm that the tasks on the replacement instances started, and then terminate the original container instances.First, create an Auto Scaling group with either a launch template or launch configuration.Note: To use the latest features from Amazon Elastic Compute Cloud (Amazon EC2), it's a best practice to use launch templates instead of configurations.(Option 1) Create a new Auto Scaling group from a launch template1.    Open the Amazon EC2 console.2.    From the navigation pane, choose Auto Scaling Groups. In the Launch template/configuration column, note the name of the launch template for any ECS container instance where you want to increase disk space.3.    From the navigation pane, choose Launch Templates.4.    Select the launch template for your existing ECS container instance, choose Actions, and then choose Modify template (Create new version).5.    Under Storage (volumes), expand the details for the EBS volume and enter a value for Size (GiB).Note: For more information on volume options, see Block device mappings.6.    Choose Create template version.7.    Under Create an Auto Scaling group from your template, choose Create Auto Scaling group.8.    When creating the Auto Scaling group, make sure that you are using the new version of the template.9.    After your new instances launch, open the Amazon ECS console, and then choose Clusters.10.    To verify that the new instances appear, select your cluster, and then choose the ECS Instances tab.(Option 2) Create a new Auto Scaling group from your original launch configuration1.    Open the Amazon EC2 console.2.    From the navigation pane, choose Auto Scaling Groups. In the Launch template/configuration column, note the name of the launch template/configuration for any ECS container instance where you want to increase disk space.3.    
From the navigation pane, choose Launch Configurations.4.    Select the launch configuration for your existing ECS container instance, choose Actions, and then choose Copy launch configuration.5.    To increase the size of your volume, enter a value for Size (GiB).Note: For more information on volume options, see Block device mappings.6.    Choose Create launch configuration.7.    Select the newly created launch configuration, and choose Actions. Then, choose Create Auto Scaling group.8.    After your new instances launch, open the Amazon ECS console, and then choose Clusters.9.    To verify that the new instances appear, select your cluster, and then choose the ECS Instances tab.Drain your original ECS container instances and migrate your containers to new instances1.    Open the Amazon ECS console.2.    Choose the ECS Instances tab, and then select the original container instances.3.    Choose Actions, and then choose Drain Instances.Note: You can drain the previous instances in batches to avoid downtime for your Amazon ECS services. When you drain service tasks for container instances, container instances in the RUNNING state are stopped and replaced according to the service's deployment configuration parameters minimumHealthyPercent and maximumPercent. Any PENDING or RUNNING tasks that don't belong to the service are unaffected. You must wait for these tasks to finish or stop them manually.4.    When the DRAINING instances have 0 running tasks, repeat steps 2 and 3 until all the original container instances are in DRAINING status.5.    Delete the original Auto Scaling group to terminate the original instances.6.    Your tasks are now running on the new instances with more storage available.Related informationContainer instance drainingUsing data volumes in tasksFollow"
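A minimal boto3 sketch of creating the new launch template version with a larger EBS volume (option 1, steps 4 through 6); the launch template ID, device name, and volume settings are placeholders.

# Sketch: add a launch template version with a bigger root volume.
import boto3

ec2 = boto3.client("ec2")

response = ec2.create_launch_template_version(
    LaunchTemplateId="lt-0123456789abcdef0",   # placeholder
    SourceVersion="$Latest",
    VersionDescription="Increase container instance volume size",
    LaunchTemplateData={
        "BlockDeviceMappings": [
            {"DeviceName": "/dev/xvda", "Ebs": {"VolumeSize": 100, "VolumeType": "gp3"}}
        ]
    },
)
print("New version:", response["LaunchTemplateVersion"]["VersionNumber"])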
https://repost.aws/knowledge-center/ecs-container-storage-increase-autoscale
How do I resolve "EC2 is out of capacity" or "The requested number of instances exceeds your EC2 quota" errors in Amazon EMR?
"My Amazon EMR cluster fails to launch, and I get one of these error messages:"EC2 is out of capacity""The requested number of instances exceeds your EC2 quota""
"My Amazon EMR cluster fails to launch, and I get one of these error messages:"EC2 is out of capacity""The requested number of instances exceeds your EC2 quota"Resolution"EC2 is out of capacity"This error means that AWS doesn't have enough available On-Demand Instance capacity to create the Amazon Elastic Compute Cloud (Amazon EC2) instances that you specified for the EMR cluster. To resolve the issue, try the following:Specify a different instance type for the EMR cluster. A different instance type might have more available capacity.Launch your cluster in a different Availability Zone. Each Availability Zone has its own capacity.Wait a few minutes, and then try to launch the EMR cluster again. Capacity shifts frequently."The requested number of instances exceeds your EC2 quota"This error means that the number of instances that you specified for the EMR cluster exceeds a service quota. To view your Amazon EC2 service quotas, open the Amazon EC2 console and then choose Limits from the navigation pane. Keep the following in mind:Amazon EC2 service quotas are unique to each Region.Only running instances count toward your service quotas.In addition to the limit on the total number of running instances, each instance type has its own limit. For example, you might be limited to 10 running a1.4xlarge instances and 20 total running instances in US East (N. Virginia).If you need more Amazon EC2 resources, request a service quota increase. Requests are subject to review by AWS engineering teams.Related informationInsufficient instance capacityHow do I increase the service quota of my Amazon EC2 resources?Configure Amazon EC2 instancesFollow"
https://repost.aws/knowledge-center/emr-cluster-failed-capacity-quota
Why did I get the error "The snapshot is currently in use by an AMI" when trying to delete my EBS snapshot?
"When I try to delete an Amazon Elastic Block Store (EBS) snapshot, I receive an error similar to the following:"snap-xxxxxxxx: The snapshot snap-xxxxxxxx is currently in use by ami-xxxxxxxx""
"When I try to delete an Amazon Elastic Block Store (EBS) snapshot, I receive an error similar to the following:"snap-xxxxxxxx: The snapshot snap-xxxxxxxx is currently in use by ami-xxxxxxxx"ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.EBS-backed Amazon Machine Images (AMIs) include EBS snapshots. If you try to delete an EBS snapshot associated with an active AMI, you receive this error.Note: Public snapshots can't be deleted. If you try to delete a public snapshot, you receive an "unknown error occurred" message.Before you attempt to delete an EBS snapshot, make sure that the AMI isn’t currently in use. You can use AMIs with a variety of AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2), AWS Auto Scaling, AWS CloudFormation, and more. If you delete an AMI that’s used by another service or application, the function of that service or application might be affected.If you no longer need the EBS snapshot or its associated AMI, deregister the AMI. Then, delete the EBS snapshot in the Amazon EC2 console:Note the AMI ID in the error message.Open the Amazon EC2 console, and from the navigation pane, choose AMIs.Choose the AMI noted in the error message, and then choose Deregister from the Actions menu.Note: If you don’t see the AMI that you’re looking for, check any other AWS Regions that you might have used.Delete the EBS snapshot by using the EC2 console or the AWS CLI.Related informationDeregister your AMIDelete an Amazon EBS snapshotFollow"
https://repost.aws/knowledge-center/snapshot-in-use-error
Why aren’t my stack-level tags propagating to resources in my CloudFormation stack?
My stack-level tags aren't propagating to resources in my AWS CloudFormation stack.
"My stack-level tags aren't propagating to resources in my AWS CloudFormation stack.Short descriptionPropagation of stack-level tags to resources can vary by resource. CloudFormation supports propagation of stack-level tags only for resources with the Tags property. For a list of AWS resources and their property types, see AWS resource and property types reference.The following examples demonstrate the difference in stack-level tag propagation between a resource that supports the Tags property, and one that doesn't.ResolutionResource that supports the Tags propertyThe resourceAWS::S3::Bucket supports the Tags property.Create a stack with the AWS::S3::Bucket resource and specify stack-level tags.After the stack is created, the S3 bucket resource has the propagated stack-level tags with the aws: prefix.Resource that doesn't support the Tags propertyAlthough the PutRule API allows you to specify tags, the AWS::Events::Rule resource doesn't support the Tags property.Create a stack with the AWS::Events::Rule resource and specify stack-level tags.After the stack is created, the Events Rule resource doesn't have the propagated stack-level tags.Search for or create an issue through GitHubIf a stack-level tag isn't propagating for a resource that supports the Tags property, then check the cloudformation-coverage-roadmap on the GitHub website to see if it's a known issue. If it isn't submitted as an issue, then choose New issue to create one.Follow"
https://repost.aws/knowledge-center/cloudformation-propagate-stack-level-tag
How can I set the default printer on Amazon Windows WorkSpaces and prevent the settings from reverting?
"I use Amazon Windows WorkSpaces, and I want to set a default printer for a WorkSpace. How can I do that?-or-I set a default printer for my Windows WorkSpace, but the settings don’t persist after I log off and back in to the WorkSpace. How can I fix this?"
"I use Amazon Windows WorkSpaces, and I want to set a default printer for a WorkSpace. How can I do that?-or-I set a default printer for my Windows WorkSpace, but the settings don’t persist after I log off and back in to the WorkSpace. How can I fix this?ResolutionSet a default printerDownload and install the driver for your local printer on the WorkSpace.From the WorkSpace, set the desired printer as the default printer, and then set any needed printer preferences.Install the Group Policy administrative template for PCoIP.Configure the local printer redirection.TroubleshootingIf the default printer settings aren’t preserved after logging off and back in to the WorkSpace, set the default printer on the WorkSpace. Then, follow these steps:Export a registry key containing the default printer settingFrom the Windows Start menu, choose Run.Enter Regedit, and then choose Ok.From the left navigation pane, expand HKEY_Current_User, Software, Microsoft, Windows NT, and CurrentVersion.Under CurrentVersion, open the context (right-click) menu for Windows, and then choose Export.Save the file to your desktop and name the file. Note the file name for a later step.Create a scheduled taskThe following process triggers a registry import when a user logs on.Note: The trigger might not work if users disconnect without logging off. If this happens, the user must log off, and then log back in.Open a command prompt, and then run taskschd.msc to open Task Scheduler.From the Actions panel, choose Create Basic Task.For Name, enter a task name, and then choose Next.For Trigger, select When I log on, and then choose Next.For Action, select Start a program, and then choose Next.For Program/script, enter the following, replacing username and filename.reg with your own values:regedit.exe /s D:\Users\username\Desktop\filename.regNote: Use the file name that you created when exporting the registry key.Choose Next, and then choose Yes.Select Open the Properties dialog for this task when I click Finish, and then choose Finish.On the General tab, for Security options, select Run with highest privileges.On the Triggers tab, select the At log on trigger, and then choose Edit.For Advanced settings, select Delay task for, change the value to 1 minute, and then choose Ok.On the Actions tab, select the Start a program action, and then choose Edit.For Start in (optional), enter c:\windows, and then choose Ok.Choose Ok to close the properties window.To confirm that the default printer settings persist, log off, and then log on again. It can take about a minute after logging on for the default printer to display correctly.Related informationPrint from a WorkSpaceFollow"
https://repost.aws/knowledge-center/workspaces-default-printer
How can I copy large amounts of data from Amazon S3 into HDFS on my Amazon EMR cluster?
I want to copy a large amount of data from Amazon Simple Storage Service (Amazon S3) to my Amazon EMR cluster.
"I want to copy a large amount of data from Amazon Simple Storage Service (Amazon S3) to my Amazon EMR cluster.Short descriptionUse S3DistCp to copy data between Amazon S3 and Amazon EMR clusters. S3DistCp is installed on Amazon EMR clusters by default. To call S3DistCp, add it as a step at launch or after the cluster is running.ResolutionTo add an S3DistCp step to a running cluster using the AWS Command Line Interface (AWS CLI), see Adding S3DistCp as a step in a cluster.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.To add an S3DistCp step using the console, do the following:1.    Open the Amazon EMR console, and then choose Clusters.2.    Choose the Amazon EMR cluster from the list, and then choose Steps.3.    Choose Add step, and then choose the following options:For Step type, choose Custom JAR.For Name, enter a name for the S3DistCp step.For JAR location, enter command-runner.jar. For more information, see Run commands and scripts on an Amazon EMR cluster.For Arguments, enter options similar to the following: s3-dist-cp --src=s3://s3distcp-source/input-data --dest=hdfs:///output-folder1.For Action on failure, choose Continue.4.    Choose Add.5.    When the step Status changes to Completed, verify that the files were copied to the cluster:$ hadoop fs -ls hdfs:///output-folder1/Note: It's a best practice to aggregate small files into fewer large files using the groupBy option and then compress the large files using the outputCodec option.TroubleshootingTo troubleshoot problems with S3DistCp, check the step and task logs.1.    Open the Amazon EMR console, and then choose Clusters.2.    Choose the EMR cluster from the list, and then choose Steps.3.    In the Log files column, choose the appropriate step log:controller: Information about the processing of the step. If your step fails while loading, then you can find the stack trace in this log.syslog: Logs from non-Amazon software, such as Apache and Hadoop.stderr: Standard error channel of Hadoop while it processes the step.stdout: Standard output channel of Hadoop while it processes the step.If you can't find the root cause of the failure in the step logs, check the S3DistCp task logs:1.    Open the Amazon EMR console, and then choose Clusters.2.    Choose the EMR cluster from the list, and then choose Steps.3.    In the Log files column, choose View jobs.4.    In the Actions column, choose View tasks.5.    If there are failed tasks, choose View attempts to see the task logs.Common errorsReducer task fails due to insufficient memory:If you see an error message similar to the following in the step's stderr log, then the S3DistCp job failed because there wasn't enough memory to process the reducer tasks:Container killed on request. Exit code is 143Container exited with a non-zero exit code 143Container [pid=19135,containerID=container_1494287247949_0005_01_000003] is running beyond virtual memory limits. Current usage: 569.0 MB of 1.4 GB physical memory used; 3.0 GB of 3.0 GB virtual memory used. 
Killing container.To resolve this problem, use one of the following options to increase memory resources for the reducer tasks:Increase the yarn.nodemanager.vmem-pmem-ratio or mapreduce.reduce.memory.mb parameter in the yarn-site.xml file on the cluster's master node.Add more Amazon Elastic Compute Cloud (Amazon EC2) instances to your cluster.Amazon S3 permission error:If you see an error message similar to the following in the step's stderr log, then the S3DistCp task wasn't able to access Amazon S3 because of a permissions problem:Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: REQUEST_IDTo resolve this problem, see Permissions errors.Related informationView log filesFollow"
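A minimal boto3 sketch of adding the same S3DistCp step to a running cluster; the cluster ID and bucket paths are placeholders.

# Sketch: add an S3DistCp step through command-runner.jar.
import boto3

emr = boto3.client("emr")

response = emr.add_job_flow_steps(
    JobFlowId="j-1234567890ABC",  # placeholder cluster ID
    Steps=[
        {
            "Name": "S3DistCp step",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "s3-dist-cp",
                    "--src=s3://s3distcp-source/input-data",
                    "--dest=hdfs:///output-folder1",
                ],
            },
        }
    ],
)
print(response["StepIds"])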
https://repost.aws/knowledge-center/copy-s3-hdfs-emr
How can I grant a user Amazon S3 console access to only a certain bucket or folder?
"I want to grant a user Amazon Simple Storage Service (Amazon S3) console access to a bucket or folder (prefix). However, I don't want the user to see other buckets in the account or other folders within the bucket."
"I want to grant a user Amazon Simple Storage Service (Amazon S3) console access to a bucket or folder (prefix). However, I don't want the user to see other buckets in the account or other folders within the bucket.Short descriptionChange a user's AWS Identity and Access Management (IAM) permissions to limit the user's Amazon S3 console access to a certain bucket or folder (prefix):1.    Remove permission to the s3:ListAllMyBuckets action.2.    Add permission to s3:ListBucket only for the bucket or folder that you want the user to access. To allow the user to upload and download objects from the bucket or folder, you must also include s3:PutObject and s3:GetObject.Resolution1.    Open the IAM console.2.    Select the IAM user or role that you want to restrict access to.3.    In the Permissions tab of the IAM user or role, expand each policy to view its JSON policy document.4.    In the JSON policy document, search for the policy that grants the user permission to the s3:ListAllMyBuckets action or to s3:* actions (all S3 actions).5.    Modify the policy to remove permission to the s3:ListAllMyBuckets action.Note: If an attached user policy allows s3:* or Full Admin access with the "*" resource, then the policy includes the s3:ListAllMyBuckets permissions. Remove the "*" resource. Then, use one of the following example policies.6.    Add permission to s3:ListBucket only for the bucket or folder that you want the user to access from the console.The following example policy is for access to an S3 bucket:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET" }, { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject" ], "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" } ]}The policy allows the user to perform the s3:ListBucket, s3:PutObject, and s3:GetObject actions only on DOC-EXAMPLE-BUCKET.The following example policy grants access to a folder. The policy allows the user to perform the s3:ListBucket, s3:ListBucketVersions, s3:PutObject, s3:GetObject, and s3:GetObjectVersion actions only on folder2 within DOC-EXAMPLE-BUCKET. Use s3:ListBucketVersions, s3:GetObjectVersion and s3:GetBucketVersioning only if the bucket has versioning, and you want users to have access to prior versions of objects.{ "Version": "2012-10-17", "Statement": [ { "Sid": "AllowUsersToAccessFolder2Only", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion", "s3:PutObject" ], "Resource": [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET/folder1/folder2/*" ] }, { "Sid": "AllowListOfBucketOnlyOnPrefix", "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:ListBucketVersions" ], "Resource": [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET" ], "Condition": { "StringLike": { "s3:prefix": [ "folder1/folder2/*" ] } } }, { "Sid": "AllowListVersionOnObjectDetails", "Effect": "Allow", "Action": [ "s3:GetBucketVersioning" ], "Resource": [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET" ] } ]}7.    Provide the user with a direct console link to the S3 bucket or folder.Warning: After you change these permissions, the user gets an Access Denied error when they access the main Amazon S3 console. The user must use a direct console link to access the bucket or folder. 
The following link is an example of a direct console link to an S3 bucket:https://s3.console.aws.amazon.com/s3/buckets/DOC-EXAMPLE-BUCKET/The following link is an example of a direct console link to a folder:https://s3.console.aws.amazon.com/s3/buckets/DOC-EXAMPLE-BUCKET/folder1/folder2/Related informationUser policy examplesFollow"
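A minimal boto3 sketch that attaches the bucket-level example policy as an inline policy on an IAM user; the user name and policy name are placeholders.

# Sketch: attach the bucket-only console access policy to a user.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
        },
    ],
}

iam.put_user_policy(
    UserName="example-user",                      # placeholder
    PolicyName="S3SingleBucketConsoleAccess",     # placeholder
    PolicyDocument=json.dumps(policy),
)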
https://repost.aws/knowledge-center/s3-console-access-certain-bucket
How do I shut down my Amazon Lightsail resources?
How do I remove the Amazon Lightsail resources from my AWS account?
"How do I remove the Amazon Lightsail resources from my AWS account?ResolutionTo remove all your Lightsail resources, delete your Lightsail instances and resources attached to these instances, such as static IP addresses, snapshots, or block storage.Lightsail resources are billed incrementally in hours or in fractions of GB-months. When all Lightsail resources are deleted, you receive no further billing related to Lightsail. For fees incurred for various resources, see Frequently Asked Questions in Amazon Lightsail.To delete your Lightsail resources, do the following:Delete your Amazon Lightsail instance - You stop incurring charges for the instance as soon as it's deleted.Delete a static IP in Amazon Lightsail - Static IP addresses are free while attached to an instance. Note that other resources that rely on this static IP might be impacted, such as DNS records that reference your static IP address.Delete your database in Lightsail - You stop incurring charges for the database as soon as it's deleted.Delete your domain's DNS zone in Lightsail - You stop incurring charges for your DNS as soon as you delete your Lightsail DNS zone.Delete disk snapshots in Lightsail - You need to retain only the most recent snapshot to restore an entire disk. You can reduce charges by removing outdated snapshots, or prevent charges by deleting all snapshots.Detach and delete your block storage disk in Lightsail - You must stop, or delete, your instance before you detach and delete your disk.Delete a Lightsail load balancer - Deleting a load balancer also detaches any Lightsail instances attached to it, but doesn't delete the Lightsail instances. If you activated encrypted (HTTPS) traffic using an SSL/TLS certificate, then deleting the load balancer also deletes any SSL/TLS certificates associated with the load balancer.Delete an SSL/TLS certificate - You can delete an SSL/TLS certificate that you no longer use. For example, your certificate might be expired, and you attached an updated certificate that's validated.Turn off VPC peering that you no longer need - To turn off your VPC peering, do the following:In the Lightsail console, choose Account on the navigation bar.Choose Advanced.In the VPC peering section, clear Enable VPC peering for all AWS Regions.Related informationHow do I change my Lightsail plan?Follow"
https://repost.aws/knowledge-center/shut-down-lightsail
How do I update my EBS volume in CloudFormation without EC2 instances being replaced?
I want to update my Amazon Elastic Block Store (Amazon EBS) volume in AWS CloudFormation without Amazon Elastic Compute Cloud (Amazon EC2) instances being replaced.
"I want to update my Amazon Elastic Block Store (Amazon EBS) volume in AWS CloudFormation without Amazon Elastic Compute Cloud (Amazon EC2) instances being replaced.Short descriptionAs a best practice, use the AWS::EC2::Volume resource type to prevent instance replacement when updating EBS volumes in CloudFormation.Instance replacement occurs when you specify volumes in the BlockDeviceMappings property of the AWS::EC2::Instance and AWS::EC2::Template resource types. In this case, you must add a Retain DeletionPolicy attribute.Prerequisites: If you modify the volume from gp2 to gp3, then make sure that the volume that's attached to the instance is modified to gp3. Also, make sure that the instance isn't in the Optimizing or Modifying states. Before you modify the volume to gp3, check what the limitations are.Important: Before resolving the issue, take a snapshot of the volumes to create a backup of critical workloads.Resolution1.    Add the Retain DeletionPolicy to the CloudFormation stack for the instance that you want to update the volume, and then update the stack:AWSTemplateFormatVersion: '2010-09-09'Resources: Myinstance: Type: AWS::EC2::Instance DeletionPolicy: Retain Properties: BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeType: gp2 VolumeSize: 10 DeleteOnTermination: true EbsOptimized: false ImageId: ami-064ff912f78e3e561 InstanceInitiatedShutdownBehavior: stop InstanceType: t2.micro Monitoring: false2.    Update the CloudFormation stack again by removing the instance from the template. Note: If you have only one resource in your template, then you must create a stand-in resource, such as another instance. You can delete the resource from the template after you finished.3.    Modify the EBS volume attributes to your requirements.4.    Import the instance back into the CloudFormation stack.To import the instance back into the CloudFormation stack:1.    Open the AWS CloudFormation console.2.    On the stack page, choose Stack actions and then choose Import resources into stack.3.    Update the template:AWSTemplateFormatVersion: '2010-09-09'Resources: Myinstance: Type: AWS::EC2::Instance DeletionPolicy: Retain Properties: BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeType: gp3 VolumeSize: 100 DeleteOnTermination: true EbsOptimized: false ImageId: ami-064ff912f78e3e561 InstanceInitiatedShutdownBehavior: stop InstanceType: t2.micro Monitoring: false4.    Enter the instance ID value into the Identifier field.5.    Choose Import resource.After CloudFormation moves to IMPORT_COMPLETE status, the instances are part of the stack again.Note: You might receive the error, There was an error creating this change set. As part of the import operation, you cannot modify or add [Outputs]. To resolve this issue, verify that the Outputs sections of the latest CloudFormation template and the template that your stack is using are the same. If they're not, update the latest CloudFormation template to match the values in the Outputs section of the template that your stack is using. Then, update the stack again.Follow"
https://repost.aws/knowledge-center/cloudformation-update-volume-instance
Why can't I connect to my VPC when using an AWS Site-to-Site VPN connection that terminates on a virtual private gateway?
I'm using an AWS Site-to-Site VPN connection that terminates on a virtual private gateway (VGW). But I can't access resources in the virtual private cloud (VPC).
"I'm using an AWS Site-to-Site VPN connection that terminates on a virtual private gateway (VGW). But I can't access resources in the virtual private cloud (VPC).ResolutionUsing the AWS Management Console, check that the Site-to-Site VPN connection's tunnel status is UP. If the connection is DOWN, then follow the troubleshooting steps for resolving connection downtime for phase 1 failures and phase 2 failures.Verify that the encryption domain that's configured on the customer gateway device is broad enough to cover the local (on-premises) and remote (AWS) network CIDRs. Site-to-Site VPN is a route-based virtual private network (VPN) solution, so by default the local and remote network CIDRs are set to any/any (0.0.0.0/0). AWS limits the number of security associations (SAs) to a single pair for both inbound and outbound security associations. So, if multiple networks are defined to communicate through the tunnel, then multiple security associations are negotiated. This setup can cause a network connectivity failure.For an active/active setup (where both tunnels are UP), make sure that asymmetric routing is supported and activated on the customer gateway device. If you haven't turned on asymmetric routing, then AWS randomly selects the egress tunnel (AWS to customer gateway traffic). For dynamic VPN, use AS PATH prepending or MED BGP attributes to use a single tunnel for return traffic from the VPC to the customer gateway device.For a static VPN, make sure that the remote on-premises network routes are defined on the VPN connection. Also, make sure that you've created a corresponding reverse route for the VPC CIDRs on the customer gateway device. This reverse route is used to route traffic through the virtual tunnel interface (VTI).For dynamic VPN, make sure that the customer gateway device is advertising the local routes to the AWS peers. Also check that the customer gateway device is receiving the VPC network CIDRs.Verify the routing on the VPC route tables. It's a best practice to turn on VGW route propagation to automatically propagate the VPN routes to the VPC route tables. Or if route propagation is turned off, you can add a static route for the on-premises network to route through the VGW.Verify that traffic is allowed on both the subnet network ACL and target resource security group. For more information, see Control traffic to resources using security groups and Controlling traffic to subnets using Network ACLs.Confirm that traffic is allowed (inbound and outbound) on the target host or instance firewall. On Windows OS, check that the Windows firewall allows traffic. For Linux systems, verify that IP tables, firewalls, and other similar host firewalls have allowed the corresponding traffic.Check if the application that's running on the target server is listening on the expected port and protocol (TCP/UDP). Run the following commands:Windows CMD:> netstat -aLinux terminal:$ sudo netstat -plantuRelated informationHow to determine which program uses or blocks specific Transmission Control Protocol ports in Windows Server 2003 in Windows documentationFollow"
https://repost.aws/knowledge-center/vpn-virtual-gateway-connection
How do I install the CONNECTION_CONTROL and CONNECTION_CONTROL_FAILED_LOGIN_ATTEMPTS plugins in Amazon RDS for MySQL?
I want to install the CONNECTION_CONTROL and CONNECTION_CONTROL_FAILED_LOGIN_ATTEMPTS plugins for my Amazon Relational Database Service (Amazon RDS) for MySQL database.
"I want to install the CONNECTION_CONTROL and CONNECTION_CONTROL_FAILED_LOGIN_ATTEMPTS plugins for my Amazon Relational Database Service (Amazon RDS) for MySQL database.ResolutionNote: The following steps apply only to Amazon RDS for MySQL. They don't apply to Amazon Aurora MySQL-Compatible Edition.The CONNECTION_CONTROL pluginCONNECTION_CONTROL (from the MySQL website) checks incoming connection attempts and adds a delay to server responses as necessary. This plugin also reveals system variables that allow for its configuration and a status variable that provides rudimentary monitoring information.CONNECTION_CONTROL doesn't come with default MySQL configurations. Therefore, you must configure the plugin after you install it.Install CONNECTION_CONTROLTo install the CONNECTION_CONTROL plugin in MySQL, run the following commands in the MySQL Command-Line Client:mysql INSTALL PLUGIN CONNECTION_CONTROLSONAME 'connection_control.so';This returns an output that's similar to the following message:Query OK, 0 rows affected (0.01 sec)For more information, see Installing connection control plugins on the MySQL website.Check the plugin's variablesYou can now verify the following variables that relate to the plugin:connection_control_failed_connections_thresholdconnection_control_max_connection_delayconnection_control_min_connection_delayTo check these variables, run the following commands:mysql SHOW VARIABLES LIKE 'connection_control%';This returns an output that's similar to the following message:+-------------------------------------------------+------------+| Variable_name | Value | +-------------------------------------------------+------------+| connection_control_failed_connections_threshold | 3 || connection_control_max_connection_delay | 2147483647 || connection_control_min_connection_delay | 1000 |+-------------------------------------------------+------------+You can't modify the values of these variables, and you must use these values by default. For more information, see Connection-control system and status variables on the MySQL website.The CONNECTION_CONTROL_FAILED_LOGIN_ATTEMPTS pluginCONNECTION_CONTROL_FAILED_LOGIN_ATTEMPTS implements an INFORMATION_SCHEMA table that reveals more detailed monitoring information for failed connection attempts.Install CONNECTION_CONTROL_FAILED_LOGIN_ATTEMPTSTo install the CONNECTION_CONTROL_FAILED_LOGIN_ATTEMPTS plugin in MySQL, run the following commands:mysqlINSTALL PLUGIN CONNECTION_CONTROL_FAILED_LOGIN_ATTEMPTSSONAME 'connection_control.so';This returns an output that's similar to the following message:Query OK, 0 rows affected (0.00 sec)View the plugins' statusTo view the status of these plugins, run the following commands:mysql SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME LIKE 'connection%'; command.This returns an output that's similar to the following message:+------------------------------------------+---------------+| PLUGIN_NAME | PLUGIN_STATUS |+------------------------------------------+---------------+| CONNECTION_CONTROL | ACTIVE || CONNECTION_CONTROL_FAILED_LOGIN_ATTEMPTS | ACTIVE |+------------------------------------------+---------------+This confirms that the status of the plugins is ACTIVE. You can now learn about any failed login attempts, compare those failures with your third-party assessment tools, and post the assessment.Related informationUNINSTALL PLUGIN statement (MySQL website)Follow"
https://repost.aws/knowledge-center/rds-mysql-connection-control-plugin
How do I troubleshoot "Error Code: 503 Slow Down" on s3-dist-cp jobs in Amazon EMR?
"My S3DistCp (s3-dist-cp) job on Amazon EMR job fails due to Amazon Simple Storage Service (Amazon S3) throttling. I get an error message similar to the following:mapreduce.Job: Task Id : attempt_xxxxxx_0012_r_000203_0, Status : FAILED Error: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Slow Down (Service: Amazon S3; Status Code: 503; Error Code: 503 Slow Down; Request ID: D27E827C847A8304; S3 Extended Request ID: XWxtDsEZ40GLEoRnSIV6+HYNP2nZiG4MQddtNDR6GMRzlBmOZQ/LXlO5zojLQiy3r9aimZEvXzo=), S3 Extended Request ID: XWxtDsEZ40GLEoRnSIV6+HYNP2nZiG4MQddtNDR6GMRzlBmOZQ/LXlO5zojLQiy3r9aimZEvXzo= at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)"
"My S3DistCp (s3-dist-cp) job on Amazon EMR job fails due to Amazon Simple Storage Service (Amazon S3) throttling. I get an error message similar to the following:mapreduce.Job: Task Id : attempt_xxxxxx_0012_r_000203_0, Status : FAILED Error: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Slow Down (Service: Amazon S3; Status Code: 503; Error Code: 503 Slow Down; Request ID: D27E827C847A8304; S3 Extended Request ID: XWxtDsEZ40GLEoRnSIV6+HYNP2nZiG4MQddtNDR6GMRzlBmOZQ/LXlO5zojLQiy3r9aimZEvXzo=), S3 Extended Request ID: XWxtDsEZ40GLEoRnSIV6+HYNP2nZiG4MQddtNDR6GMRzlBmOZQ/LXlO5zojLQiy3r9aimZEvXzo= at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)Short description"Slow Down" errors occur when you exceed the Amazon S3 request rate (3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second per prefix in a bucket). This often happens when your data uses Apache Hive-style partitions. For example, the following Amazon S3 paths use the same prefix (/year=2019/). This means that the request limit is 3,500 write requests or 5,500 read requests per second.s3://awsexamplebucket/year=2019/month=11/day=01/mydata.parquets3://awsexamplebucket/year=2019/month=11/day=02/mydata.parquets3://awsexamplebucket/year=2019/month=11/day=03/mydata.parquetIf increasing the number of partitions isn't an option, reduce the number of reducer tasks or increase the EMR File System (EMRFS) retry limit to resolve Amazon S3 throttling errors.ResolutionUse one of the following options to resolve throttling errors on s3-dist-cp jobs.Reduce the number of reducesThe mapreduce.job.reduces parameter sets the number of reduces for the job. Amazon EMR automatically sets mapreduce.job.reduces based on the number of nodes in the cluster and the cluster's memory resources. Run the following command to confirm the default number of reduces for jobs in your cluster:$ hdfs getconf -confKey mapreduce.job.reducesTo set a new value for mapreduce.job.reduces, run a command similar to the following. This command sets the number of reduces to 10.$ s3-dist-cp -Dmapreduce.job.reduces=10 --src s3://awsexamplebucket/data/ --dest s3://awsexamplebucket2/output/Increase the EMRFS retry limitBy default, the EMRFS retry limit is set to 4. Run the following command to confirm the retry limit for your cluster:$ hdfs getconf -confKey fs.s3.maxRetriesTo increase the retry limit for a single s3-dist-cp job, run a command similar to the following. This command sets the retry limit to 20.$ s3-dist-cp -Dfs.s3.maxRetries=20 --src s3://awsexamplebucket/data/ --dest s3://awsexamplebucket2/output/To increase the retry limit on a new or running cluster:New cluster: Add a configuration object similar to the following when you launch a cluster.Running cluster: Use the following configuration object to override the cluster configuration for the instance group (Amazon EMR release versions 5.21.0 and later).[ { "Classification": "emrfs-site", "Properties": { "fs.s3.maxRetries": "20" } }]When you increase the retry limit for the cluster, Spark and Hive applications can also use the new limit. 
Here's an example of a Spark shell session that uses the higher retry limit:spark> sc.hadoopConfiguration.set("fs.s3.maxRetries", "20")spark> val source_df = spark.read.csv("s3://awsexamplebucket/data/")spark> source_df.write.save("s3://awsexamplebucket2/output/")Related informationBest practices design patterns: optimizing Amazon S3 performanceWhy does my Spark or Hive job on Amazon EMR fail with an HTTP 503 "Slow Down" AmazonS3Exception?Follow"
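If you submit s3-dist-cp as an EMR step rather than running it on the primary node, you can pass the same properties through the step arguments. The following is a sketch that assumes a placeholder cluster ID and the example bucket names used above:
# Submit an s3-dist-cp step that lowers the reducer count and raises the EMRFS retry limit
aws emr add-steps \
  --cluster-id j-XXXXXXXXXXXXX \
  --steps 'Type=CUSTOM_JAR,Name=S3DistCpWithRetries,ActionOnFailure=CONTINUE,Jar=command-runner.jar,Args=[s3-dist-cp,-Dmapreduce.job.reduces=10,-Dfs.s3.maxRetries=20,--src,s3://awsexamplebucket/data/,--dest,s3://awsexamplebucket2/output/]'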
https://repost.aws/knowledge-center/503-slow-down-s3-dist-cp-emr
Why am I unable to mount my Amazon EFS volumes on my AWS Fargate tasks?
I'm getting errors when I mount my Amazon Elastic File System (Amazon EFS) volumes on my AWS Fargate tasks.
"I'm getting errors when I mount my Amazon Elastic File System (Amazon EFS) volumes on my AWS Fargate tasks.ResolutionAmazon EFS provides a persistent storage solution for your Fargate tasks to share files and data across different tasks.You might be unable to mount your Amazon EFS volumes on your Fargate tasks due to one or more of the following reasons:The Amazon EFS file system isn't configured correctly.The Amazon Elastic Container Service (Amazon ECS) task IAM role doesn't have the required permissions.There are issues related to network and Amazon Virtual Private Cloud (Amazon VPC) configurations.You might get one of the following errors when you try to mount your EFS volume on your Fargate task.ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: b'mount.nfs4: Connection timed out' : unsuccessful EFS utils command execution; code: 32You get the preceding error when your Fargate task can't connect to the EFS filesystem because of connection timing. To resolve this error, try the following troubleshooting steps:1.    Open the Amazon EFS console.2.    In the navigation pane, choose File systems.3.    Choose the file system that you want to check by choosing its Name or the File system ID.4.    Choose Network to display the list of existing mount targets.5.    Choose Manage.You can view the security group and the security group's inbound rules for the mount targets.Be sure that the inbound rule for the security group allows traffic from the Fargate task security group on port 2049. Confirm that network traffic is allowed at the subnet level. To confirm, verify that the network access control list allows traffic between the file system and task. If the traffic isn't allowed, then modify the rules accordingly. For more information, see Security in the VPC with public and private subnets (NAT) documentation.ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: mount.nfs4: Connection reset by peer : unsuccessful EFS utils command execution; code: 32You get the preceding error due to one of the following reasons:You mounted the EFS file system immediately after creating the file system.The security group for the mount target doesn't allow inbound traffic from Fargate tasks on port 2049.You're using AWS App Mesh, and outbound to port 2049 is blocked because of proxy rules.To troubleshoot this error, follow these steps:Up to 90 seconds can elapse for the DNS records to propagate completely in an AWS Region after creating a mount target. If you're programmatically creating and mounting the file systems, such as with an AWS CloudFormation template, it's a best practice to implement a wait condition.Confirm that the inbound security group rule that's attached to the EFS file system mount targets allows traffic on port 2049 from Fargate tasks.If you're using AppMesh, then make sure that your proxy configuration specified in the TaskDefinition includes 2049 as EgressIgnoredPorts.ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: Failed to resolve "fs-xxxxxxxxxxx.efs.us-east-1.amazonaws.com" - check that your file system ID is correctYou get the preceding error due to one of the following reasons:The EFS file system mount target isn't created or available in an Availability Zone where Fargate tasks are launched.You're using a custom DNS server for the VPC.The VPC DNS hostnames are turned off. 
DNS hostnames are turned off by default.To resolve this error, try the following steps:Be sure that the EFS file system mount target is in the same Availability Zone as the Fargate task. You can view the Availability Zone, subnet, and security group of the mount target in the Amazon EFS console. Then, verify that the mount target uses the same Availability Zone and subnet as the Fargate task.If you specified a custom DNS server for your VPC DHCP options instead of AmazonProvidedDNS, then be sure to configure conditional DNS forwarders. The DNS forwarders must send the DNS queries of AWS resources (*.amazonaws.com) to the VPC's default DNS server at the VPC CIDR +2 address or 169.254.169.253. For more information, see How to set up DNS resolution between on-premises networks and AWS using AWS Directory Service and Microsoft Active Directory.ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: b'mount.nfs4: access denied by server while mounting 127.0.0.1:/' : unsuccessful EFS utils command execution; code: 32You get the preceding error when access to the file system is denied by the following policies and permissions:The file system policyThe task role policyThe POSIX file system level permissionsAccess to an EFS file system might be controlled by permissions that are defined in the following resources:The network access control listSecurity groupsEFS file system policiesECS task role IAM policyPOSIX file-level permissionsFor more information, see Developers guide to using Amazon EFS with Amazon ECS and AWS Fargate – Part 2.To troubleshoot this error, check if the file system policy or the ECS task role IAM policy denies access to the file system. If these policies deny permissions, then modify the policies to grant permissions to access the file system. If the file system policy doesn't exist, then access to the file system is granted by default to all principals during creation.Related informationCreating Amazon EFS file systemsCreating and managing mount targets and security groupsHow do I mount an Amazon EFS file system on an Amazon ECS container or task running on Fargate?File system mount fails immediately after file system creationFollow"
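As a quick way to confirm the network-side requirements from the command line, you can list the file system's mount targets and inspect their security groups with the AWS CLI. This is a sketch that uses placeholder IDs:
# List the mount targets (and their Availability Zones and subnets) for the file system
aws efs describe-mount-targets --file-system-id fs-0123456789abcdef0
# Show the security groups attached to one mount target
aws efs describe-mount-target-security-groups --mount-target-id fsmt-0123456789abcdef0
# Inspect that security group's inbound rules and confirm that port 2049 is allowed
# from the Fargate task's security group
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0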
https://repost.aws/knowledge-center/fargate-unable-to-mount-efs
How do I create an SQL Server Always On availability group cluster in the AWS Cloud?
I want to create an SQL Server Always On availability group cluster in the AWS Cloud. How can I do this?
"I want to create an SQL Server Always On availability group cluster in the AWS Cloud. How can I do this?Short descriptionTo create an SQL Server Always On availability group cluster in the AWS Cloud, first configure two secondary IPs for each cluster node elastic network interface. Then, use Remote Desktop Protocol (RDP) to connect as a Domain Administrator account to the cluster node instances. Finally, create a two-node Windows cluster and SQL Server Always On availability groups.You can also use AWS Launch Wizard to create an SQL Server Always On availability group deployment. The Launch Wizard identifies the AWES resources to provision the SQL Server databases automatically based on your use case. For more information, see What is AWS Launch Wizard for SQL Server?ResolutionPrerequisitesLaunch two Amazon Elastic Compute Cloud (Amazon EC2) Windows Server instances (version 2012 R2 or later) across Availability Zones inside a Virtual Private Cloud (VPC).Use SQL Server 2014 64-bit Enterprise edition or later. For testing, use SQL Server 2014 64-bit Evaluation edition or later.Configure secondary Amazon Elastic Block Store (Amazon EBS) volumes to host SQL Server Master Data File, Log Data File, and SQL Backup files. It's a best practice to choose Provisioned IOPS SSD (io1) EBS volumes for large SQL Server database workloads.Deploy the cluster nodes in private subnets. You can then use RDP to connect from a jump server to the cluster node instances.Configure security group inbound rules and Windows Firewall exceptions to allow the nodes to communicate in a restrictive environment. See Configure the Windows Firewall to allow SQL Server access on the Microsoft website.Active Directory (AD) domain controllers must have all necessary ports opened for the SQL nodes and witness to join the domain and authenticate against Active Directory. See Active Directory and Active Directory Domain Services Port Requirements on the Microsoft website.Join the nodes to the domain before creating the Windows failover cluster. Verify that you are logged in using domain credentials before creating and configuring the cluster.Run the SQL DB instances with an Active Directory service account.Create an SQL login with sysadmin permission using Windows domain authentication. Consult your Database Administrator for more information. For more details, see Create a login using SSMS on the Microsoft website.Verify that the SQL browser is configured properly. This is required only for SQL Server named instances.Configure the secondary IPs for each cluster node elastic network interfaceA secondary IP is required for each cluster node eth0 elastic network interface. If you want an SQL Group Listener, then you need to add a third IP address. This results in total of 3 IPs that are attached to the eth0 elastic network interface.Note: If you don't plan to deploy an SQL Group Listener, add only one secondary IP for each cluster node elastic network interface.1.    Open the Amazon EC2 console, and then choose the AWS Region that will host your Always On cluster.2.    Choose Instances from the navigation pane, and then select your EC2 cluster instance.3.    Choose the Networking tab.4.    Under Network interfaces, choose the Interface ID elastic network interface.5.    Select the network interface, and then choose Actions, Manage IP addresses.6.    Choose the arrow next to the network interface ID to expand the window, and then choose Assign new IP address. You can select a specific IP or leave the field as Auto-assign. 
Repeat this step to add a second new IP.7.    Choose Save, Confirm.8.    Repeat steps 1-7 for the other EC2 instance that will participate in the cluster.Create a two-node Windows cluster1.    Connect to your EC2 instance using RDP with a domain account that has local Administrator permissions on both nodes.2.    On the Windows Start menu, open Control Panel, and then choose Network and Sharing Center.3.    Choose Change adapter settings from the navigation pane.4.    Select your network connection, and then choose Change settings of this connection.5.    Select Internet Protocol Version 4 (TCP/IPv4), and then choose Properties.6.    Choose Advanced.7.    On the DNS tab, choose Append primary and connection specific DNS suffixes.8.    Choose Ok, choose Ok, and then choose Close.9.    Repeat steps 1-8 for the other EC2 instance that will participate in the cluster.10.    On each instance, install the cluster feature on the nodes from the Server Manager, or run the following PowerShell command:Install-WindowsFeature –Name Failover-Clustering –IncludeManagementTools11.    Open cmd as Administrator, and enter cluadmin.msc to open the Cluster Manager.12.    Open the context (right-click) menu for Failover Cluster Manager, and then choose Create Cluster.13.    Choose Next, and then choose Browse.14.    For Enter the object names to select, enter the cluster node hostnames, and then choose Ok.15.    Choose Next. You can now choose whether you want to validate the cluster. It's a best practice to run a cluster validation. If the cluster doesn't pass validation, Microsoft might not be able to provide technical support for your SQL cluster. Choose Yes or No, and then choose Next.16.    For Cluster Name, enter a name, and then choose Next.17.    Clear Add all eligible storage to the cluster, and then choose Next.18.    When the cluster creation is complete, choose Finish.Note: Cluster logs and reports are located at the following path: %systemroot%\cluster\reports19.    In the Cluster Core Resources section of Cluster Manager, expand the entry for your new cluster.20.    Open the context (right-click) menu for the first IP Address entry, and then choose Properties. For IP Address, choose Static IP Address, and then enter one of the secondary IPs associated with eth0 elastic network interface. Choose Ok. Repeat this step for the second IP Address entry.21.    Open the context (right-click) menu for the cluster name, and then choose Bring Online.Note: It's a best practice to also configure a File Share Witness (FSW) to act as a tie breaker. You can also use Amazon FSx for Windows File Server with Microsoft SQL Server. For information on FSW, see Failover Cluster Step-by-Step Guide: Configuring the Quorum in a Failover Cluster on the Microsoft website.Create Always On availability groups1.    Open SQL Server Configuration Manager.2.    Open the context (right-click) menu for the SQL instance, and then choose Properties.3.    On the AlwaysOn High Availability tab, select Enable AlwaysOn Availability Groups, and then choose Apply.4.    Open the context (right-click) menu for the SQL instance, and then choose Restart.5.    Repeat steps 1-4 on the other cluster node part of the cluster.6.    Open Microsoft SQL Server Management Studio (SSMS).7.    Log in to one of the SQL instances with your Windows authenticated login that has access to the SQL instance.Note: It's a best practice to use the same MDF and LDF directory file paths across the SQL instances.8.    Create a test database. 
Open the context (right-click) menu for Databases, and then choose New Database.Note: Be sure to use the Full recovery model on the Options page. For more information, see Recovery Models (SQL Server) on the Microsoft website.9.    For Database name, enter a name, and then choose Ok.10.    Open the context (right-click) menu for the new database name, choose Tasks, and then choose Back Up. For Backup type, choose Full.11.    Choose Ok, and then choose Ok.12.    Open the context (right-click) menu for Always On High Availability, and then choose New Availability Group Wizard.13.    Choose Next.14.    For Availability group name, enter a name, and then choose Next.15.    Select your database, and then choose Next.16.    A primary replica is already present in the Availability Replicas window. Choose Add Replica to create a secondary replica.17.    For Server name, enter a name for the secondary replica, and then choose Connect.18.    For Availability Mode, decide which availability mode you want, and then choose Synchronous commit or Asynchronous commit for each replica. For more information, see Differences between availability modes for an Always On availability group on the Microsoft website.19.    Choose Next.20.    Choose your data synchronization preference, and then choose Next. For more information, see Select Initial Data Synchronization Page (Always On Availability Group Wizards) on the Microsoft website.21.    When the validation is successful, choose Next.Note: You can safely ignore Checking the listener configuration, as you'll add it later.22.    Choose Finish, and then choose Close.Add an SQL Group Listener1.    Open SSMS, and then expand Always On High Availability, Availability Groups, [primary replica name].2.    Open the context (right-click) menu for Availability Group Listeners, and then choose Add Listener.For Listener DNS Name, enter a name.For Port, enter 1433.For Network Mode, choose Static IP.3.    Choose Add.For IPv4 Address, enter the third IP address from one of the cluster node instances, and then choose Ok. Repeat this step, using the third IP address from the other cluster node instance.4.    Choose Ok.Note: Errors received when adding an SQL Group Listener indicate missing permissions. For troubleshooting steps, see the following resources on the Microsoft website:KB2829783 - Troubleshooting AlwaysOn availability group listener creation in SQL Server 2012Create Availability Group Listener Fails with Message 19471, 'The WSFC cluster could not bring the Network Name resource online'Test failover1.    Using SSMS, open the context (right-click) menu for the primary replica on the navigation menu, and then choose Failover.2.    Choose Next, and then choose Next.3.    Choose Connect, and then choose Connect.4.    Choose Next, and then choose Finish. The primary replica will become the secondary replica after failover.Related informationBest practices and recommendations for SQL Server clustering on EC2SQL Server with Always-on Replication on AWSAWS Launch WizardConfigure a listener for an Always On availability group (on the Microsoft website)Follow"
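If you prefer to script the "Configure the secondary IPs" section instead of using the console, the AWS CLI can attach the additional addresses to each node's eth0 elastic network interface. The following is a sketch with placeholder ENI IDs and addresses:
# Add two secondary private IP addresses (cluster IP and listener IP) to a node's eth0 ENI
aws ec2 assign-private-ip-addresses \
  --network-interface-id eni-0123456789abcdef0 \
  --secondary-private-ip-address-count 2
# Or assign specific addresses from the subnet's CIDR range instead
aws ec2 assign-private-ip-addresses \
  --network-interface-id eni-0123456789abcdef0 \
  --private-ip-addresses 10.0.0.10 10.0.0.11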
https://repost.aws/knowledge-center/ec2-windows-sql-server-always-on-cluster
How can I manage an AWS Managed Microsoft AD or Simple AD directory from an Amazon EC2 Windows instance?
I want to use an Amazon Elastic Compute Cloud (Amazon EC2) Windows instance to manage AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) or Simple AD. How can I do that?
"I want to use an Amazon Elastic Compute Cloud (Amazon EC2) Windows instance to manage AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) or Simple AD. How can I do that?ResolutionFirst, join the EC2 Windows instance to the directory in one of the following ways:To join a new instance to an AWS Managed Microsoft AD or Simple AD directory during launch, see Seamlessly join a Windows EC2 instance.To manually join an existing instance to a directory, see Manually join a Windows instance.Then, to manage the directory from the EC2 Windows instance, install Active Directory administration tools on the instance.Related informationAWS Managed Microsoft ADSimple Active DirectoryAWS Directory Service FAQsFollow"
https://repost.aws/knowledge-center/manage-ad-directory-from-ec2-windows
Why is my application or website hosted on Route 53 unreachable?
"I'm running an application or website on Amazon Route 53. However, I'm unable to access my application or website. How can I troubleshoot this?"
"I'm running an application or website on Amazon Route 53. However, I'm unable to access my application or website. How can I troubleshoot this?ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Check for domain status issues1.    Use the following command to check the domain status:whois domain_name |grep 'status'If the domain status (Extensible Provisioning Protocol code) is "inactive" or "ServerHold" or "ClientHold", the domain won't resolve.2.    If your see an unusual domain status code, including "inactive" or "ServerHold" or "ClientHold", contact your registrar.Use the following command to determine the domain registrar:whois domain_name |grep 'Registrar'Query your preferred Whois utility (domain registration lookup tool) for generic or country-specify top-level domains (TLDs).Check for name server issues1.    Confirm that the authoritative name server is correctly configured at your registrar. To find the authoritative name servers, check the authoritative_nameserver value in the name server (NS) resource record set of the public hosted zone.2.    If you're using Route 53 as your DNS service provider, be sure that you correctly configured each of the four name servers.Use the following command to check the name server configuration:whois domain_name |grep 'Name Server'For example, the output for whois amazon.com |grep 'Name Server' is:Name Server: NS1.P31.DYNECT.NETName Server: NS2.P31.DYNECT.NETName Server: NS3.P31.DYNECT.NETName Server: NS4.P31.DYNECT.NETName Server: PDNS1.ULTRADNS.NETName Server: PDNS6.ULTRADNS.CO.UKCheck for record set issuesUse the following command to check if you've created the required alias (A) record in the hosted zone with the DNS service provider:dig Domain_name record_typeFor example, the output for $dig amazon.com A is:; <<>> DiG 9.10.6 <<>> amazon.com +question;; global options: +cmd;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29804;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1;; OPT PSEUDOSECTION:; EDNS: version: 0, flags:; udp: 512;; QUESTION SECTION:;amazon.com. IN A;; ANSWER SECTION:amazon.com. 44 IN A 54.239.28.85amazon.com. 44 IN A 205.251.242.103amazon.com. 44 IN A 176.32.103.205;; Query time: 4 msec;; SERVER: 192.168.1.1#53(192.168.1.1);; WHEN: Fri Mar 19 20:28:51 IST 2021;; MSG SIZE rcvd: 87Note: The record type is listed in the Type column of the corresponding resource record set. For more information, see Supported DNS record types.Check for source issuesFor local browsers or mobile devices:Clear your browser cache and then try to access the domain.Check whether you're requesting the correct domain. Mobile device browsers might append "www" when requesting the domain.For an on-premises machine connected to an Amazon Virtual Private Cloud (Amazon VPC) or AWS resource using VPC .2 Resolver:If you have private and public hosted zones with overlapping namespaces, such as "example.com" and "accounting.example.com", then Resolver routes traffic based on the most specific match. If there's a matching private hosted zone but no record that matches the domain name and type in the request, then Resolver doesn't forward the request to a public DNS resolver. Instead, it returns an NXDOMAIN (non-existent domain) error to the client. If you unintentionally created a private hosted zone with overlapping namespaces, you can delete the private hosted zone.Check for record caching issues1.    
Use the following command to check if the record value returned from the DNS resolver matches the value returned from the authoritative name server. If the domain isn't resolving to the expected IP address, the DNS resolver might have cached the value. Clear your browser cache if the domain is resolving to an unexpected IP address.dig domain_name record_type @authorative_name_serverFor example, the output for $dig amazon.com @NS1.P31.DYNECT.NET is:; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.68.rc1.64.amzn1 <<>> amazon.com @NS1.P31.DYNECT.NET;; global options: +cmd;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 63711;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0;; WARNING: recursion requested but not available;; QUESTION SECTION:;amazon.com. IN A;; ANSWER SECTION:amazon.com. 60 IN A 205.251.242.103amazon.com. 60 IN A 54.239.28.85amazon.com. 60 IN A 176.32.103.205;; Query time: 2 msec;; SERVER: 208.78.70.31#53(208.78.70.31) ;; WHEN: Fri Mar 19 15:08:52 2021;; MSG SIZE rcvd: 762.    Use the following command to check if you're seeing the same results with the public resolver. If the public resolver is returning the expected answer, the issue is likely with the DNS resolver on the local machine.dig domain @public_resolver_IpFor example, the output for $dig amazon.com @8.8.8.8 is:; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.68.rc1.64.amzn1 <<>> amazon.com @8.8.8.8;; global options: +cmd;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26860;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0;; QUESTION SECTION:;amazon.com. IN A;; ANSWER SECTION:amazon.com. 15 IN A 205.251.242.103amazon.com. 15 IN A 54.239.28.85amazon.com. 15 IN A 176.32.103.205;; Query time: 1 msec;; SERVER: 8.8.8.8#53(8.8.8.8);; WHEN: Fri Mar 19 15:09:41 2021;; MSG SIZE rcvd: 76Check for DNSSEC issuesConfirm that you've correctly configured DNSSEC for your domain. Use the DNSSEC analyzer tool or your preferred utility to see if there are DNSSEC issues with the domain.Pass the DNSSEC and see if you're getting expected results:dig domain_name +cdFor example, the output for $ dig amazon.com +cd is:; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.68.rc1.64.amzn1 <<>> amazon.com +cd;; global options: +cmd;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55636;; flags: qr rd ra cd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0;; QUESTION SECTION:;amazon.com. IN A;; ANSWER SECTION:amazon.com. 29 IN A 205.251.242.103amazon.com. 29 IN A 176.32.103.205amazon.com. 29 IN A 54.239.28.85;; Query time: 2 msec;; SERVER: 1.1.1.1#53(1.1.1.1);; WHEN: Fri Mar 19 15:10:13 2021;; MSG SIZE rcvd: 76Check for webserver issuesIf you're seeing the expected IP address for the domain in curl command output, check if you're getting the Expected HTTP response from the server:1XX (Informational)2XX (Successful)3XX (Redirection)4XX (Client Error)5XX (Server Error)If the DNS resolution is working as expected but the server isn't responding, the issue is with the web server where the website or application is hosted.Command:curl -Iv http://domain_name:Port/PathFor example, the output for $ curl -Iv http://amazon.com:80 is:* Rebuilt URL to: http://amazon.com:80/* Trying 176.32.103.205... 
<--- Indicates no issues with the DNS resolution as we are getting expected IP address for the domain amazon.com.* TCP_NODELAY set* Connected to amazon.com (176.32.103.205) port 80 (#0)> HEAD / HTTP/1.1> Host: amazon.com> User-Agent: curl/7.61.1> Accept: */*> < HTTP/1.1 301 Moved PermanentlyHTTP/1.1 301 Moved Permanently< Server: ServerServer: Server< Date: Fri, 19 Mar 2021 15:11:18 GMTDate: Fri, 19 Mar 2021 15:11:18 GMT< Content-Type: text/htmlContent-Type: text/html< Content-Length: 179Content-Length: 179< Connection: keep-aliveConnection: keep-alive< Location: https://amazon.com/Location: https://amazon.com/< * Connection #0 to host amazon.com left intactNote: The Port value is the web server port on which the website or application is configured to listen.Follow"
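To run the checks above in one pass, you can wrap them in a small shell script. A minimal sketch (the domain name is an example):
#!/bin/bash
# Quick DNS health check for a domain: registration status, delegated name servers,
# resolution through a local and a public resolver, and the web server's HTTP response.
DOMAIN="example.com"
whois "$DOMAIN" | grep -iE 'status|name server'   # domain status and registrar name servers
dig "$DOMAIN" A +short                            # answer from the local resolver
dig "$DOMAIN" A +short @8.8.8.8                   # answer from a public resolver
curl -sIv "http://$DOMAIN" 2>&1 | head -n 20      # web server response headers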
https://repost.aws/knowledge-center/route-53-fix-unreachable-app-or-site
How can I generate server and client certificates and their respective keys on a Windows server and upload them to ACM?
How can I generate server and client certificates and their respective keys on a Windows server and upload them to AWS Certificate Manager (ACM)?
"How can I generate server and client certificates and their respective keys on a Windows server and upload them to AWS Certificate Manager (ACM)?ResolutionGenerate the server and client certificates and their respective keys1.    Go to the OpenVPN Community Downloads page.2.    Select the Windows Installer (.exe) file for the Windows OS version that you're running. Then, choose Run.3.    Complete the OpenVPN Setup Wizard:Choose Next.Review the license agreement, and then choose I Agree.For Choose Components, select EasyRSA 2 Certificate Management Scripts.Choose Next, and then choose Install.4.    After the OpenVPN software is installed, open a command prompt and navigate to the easy-rsafolder:cd \Program Files\OpenVPN\easy-rsa5.    Start the OpenVPN configuration:init-config6.    Open the vars.bat file in a text editor:notepad vars.bat Set KEY_Size=2048. Then, set values for KEY_COUNTRY, KEY_PROVINCE, KEY_CITY, KEY_ORG, and KEY_EMAIL. Don’t leave any of these parameters blank.Save and close your text editor.7.    Run the following commands to set the above variables for the certificate authority (CA) certificate, initialize the public key infrastructure (PKI), and build the CA certificate:varsclean-allbuild-caAt the prompt, leave all fields as the default values. Optionally, you can change the Common Nameto your server's domain name.8.    Run the following command to generate a certificate and private key for the server:build-key-server serverAt the prompt, change the Common Nameto your server's domain name using the format server.example.com. Leave all of the remaining fields as the default values.9.    Run the following command to generate a certificate and private key for the client:build-key client1At the prompt,changethe Common Nameto your client's domain name using the format client1.example.com. Leave all of the remaining fields as the default values.10.    (Optional) If needed, create additional client certificates and keys.build-key client2At the prompt, change the Common Name to your client's domain name using the format client2.example.com. Leave all of the remaining fields as the default values.Important: If you don't follow the format specified above for setting common names, the domain names aren't available when you import the certificate into ACM. As a result, the certificate isn't an available option for specifying the server certificate or client certificate when you create the AWS Client VPN endpoint.Import the server and client certificates and keys into ACMNote: The server and client certificates, and their respective keys, are available in C:\Program Files\OpenVPN\easy-rsa\keys.1.    Open the following files: server.crt, server.key, client1.crt, client1.key, and ca.crt.2.    Open the ACM console, and then choose Import a certificate.3.    On the Import a certificatepage, copy/paste the content:From the server.crtfile to Certificate body.From the server.keyfile to Certificate private key.          From the ca.crtfile to Certificate chain.     4.    Choose Import to import the server certificate.5.    Choose Import a certificateagain and copy/paste the content:                 From the client1.crtfile to Certificate body.From the client1.key file to Certificate private key.            From the ca.crt fileto Certificate chain.        6.    
Choose Import to import the client certificate.Or, you can use the AWS Command Line Interface (AWS CLI) to import the server and client certificates and their keys into ACM:cd C:\Program Files\OpenVPN\easy-rsa\keysaws acm import-certificate --certificate file://"C:\Program Files\OpenVPN\easy-rsa\keys\server.crt" --private-key file://"C:\Program Files\OpenVPN\easy-rsa\keys\server.key" --certificate-chain file://"C:\Program Files\OpenVPN\easy-rsa\keys\ca.crt"aws acm import-certificate --certificate file://"C:\Program Files\OpenVPN\easy-rsa\keys\client1.crt" --private-key file://"C:\Program Files\OpenVPN\easy-rsa\keys\client1.key" --certificate-chain file://"C:\Program Files\OpenVPN\easy-rsa\keys\ca.crt"Confirm that you have successfully created and imported your server and client certificates1.    Open the ACM console.2.    In the certificates list, confirm that Issued displays in the Status column for your server and client certificates.Related informationMutual Authentication (for AWS Client VPN)Follow"
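You can also confirm the imports from the AWS CLI instead of the console. A sketch (the certificate ARN is a placeholder returned by the import commands above):
# List the certificates in the current Region with their domain names and ARNs
aws acm list-certificates \
  --query "CertificateSummaryList[].{Domain:DomainName,Arn:CertificateArn}" \
  --output table
# Check that a specific certificate shows a status of ISSUED
aws acm describe-certificate \
  --certificate-arn arn:aws:acm:us-east-1:111122223333:certificate/example-1234-abcd \
  --query "Certificate.Status"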
https://repost.aws/knowledge-center/client-vpn-generate-certs-keys-windows
How do I increase my custom origin's response timeout in CloudFront?
How do I increase the amount of time that Amazon CloudFront waits for a response from my custom origin?
"How do I increase the amount of time that Amazon CloudFront waits for a response from my custom origin?Short descriptionTo adjust the timeout value that CloudFront uses when communicating with your custom origin, change the origin's response timeout setting in the CloudFront console.Important: If you're getting HTTP 504 errors from CloudFront, make sure that you verify the following before you increase your origin's response timeout:The firewall and security groups on your origin server allow CloudFront traffic.The origin server is accessible on the internet.The server timeouts aren't being caused by delayed responses from applications on your origin server.For more information, see HTTP 504 status code (Gateway Timeout) in the CloudFront Developer Guide.Resolution1.    Open the CloudFront console.2.    In the Distributions pane, in the ID column, select the ID of the distribution that you want to edit.3.    Choose the Origins tab. The Origins page appears.4.    In the Origin name column, select the check box next to the origin name that you want to edit. Then, choose Edit. The Edit origin page appears.5.    In the Settings pane, choose Additional settings.6.    For Response timeout, enter the new timeout value.7.    Choose Save changes.Note: To configure a Response timeout value that's greater than 60 seconds, you must request a quota increase. The default Response timeout value is 30 seconds.Follow"
https://repost.aws/knowledge-center/cloudfront-custom-origin-response
"How can I change the default Amazon SNS email subject line, "AWS Notification Message", for an EventBridge rule?"
"I configured my Amazon Simple Notification Service (Amazon SNS) to receive email notifications from an Amazon EventsBridge rule that has multiple event sources. How do I customize the default SNS email subject line, "AWS Notification Message," and email body based on the event that triggers the notification?"
"I configured my Amazon Simple Notification Service (Amazon SNS) to receive email notifications from an Amazon EventsBridge rule that has multiple event sources. How do I customize the default SNS email subject line, "AWS Notification Message," and email body based on the event that triggers the notification?Short descriptionThere is currently no way to customize the message of an Amazon SNS email based on specific EventBridge rules using the Amazon SNS console.Use a Lambda function, instead of the Amazon SNS topic, as a target for the EventBridge rule. Then, configure the Lambda function to publish a custom message to the Amazon SNS topic when triggered by the EventBridge rule.Here's how the logic works:The EventBridge rule is triggered.The Lambda function invokes with the payload of the EventBridge rule.The function calls the Amazon SNS publish API.Amazon SNS delivers an email notification with a custom subject or message.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Create an AWS Lambda function that is set as a target for the EventBridge ruleFor more information, see Tutorial: Schedule AWS Lambda functions using EventBridge.Important: Make sure that the Lambda function’s execution role has permission to publish to the Amazon SNS topic. For example, if your function's execution role has the AWS managed policy, AWSLambdaBasicExecutionRole, you must attach the AmazonSNSFullAccess policy to the execution role.After the Lambda function is set as a target for the EventBridge rule, the following resource-based policy is automatically added to the function:{ "Version": "2012-10-17", "Id": "default", "Statement": [ { "Sid": "AWSEvents_CWRule_CustomEmailSubject_Id196649187337", "Effect": "Allow", "Principal": { "Service": "events.amazonaws.com" }, "Action": "lambda:InvokeFunction", "Resource": "Lambda-function-ARN", "Condition": { "ArnLike": { "AWS:SourceArn": "Eventbridge-rule-ARN" } } } ]}Configure the Lambda function to publish a custom email subject or a custom message to the Amazon SNS topicFor more information, see Publish (API reference).Important: The following code snippets are for reference only. Don't use the code snippets in your production environment before testing them.Example Python publish API callimport boto3import jsonsns_arn = "SNS_TOPC_ARN"def lambda_handler(event, context): client = boto3.client("sns") resp = client.publish(TargetArn=sns_arn, Message=json.dumps(event), Subject="CUSTOM_SUBJECT")Example JavaScript/Node.js publish API callconst AWS = require('aws-sdk');exports.handler = (event, context, callback) => { let sns = new AWS.SNS(); sns.publish({ TopicArn: 'SNS_TOPIC_ARN', Message: JSON.stringify(event), Subject: 'CUSTOM_SUBJECT' }, function(error, data){ if(error) console.log(error, error.stack); callback(error, data); });};Important: Make sure that you replace the values for SNS_TOPIC_ARN and CUSTOM_SUBJECT with your own inputs.In this way, you can use a Lambda function to customize and forward a EventBridge rule’s email subject or message to an Amazon SNS topic.Related informationTutorial: Create an Amazon EventBridge rule for AWS CloudTrail API callsFollow"
https://repost.aws/knowledge-center/change-sns-email-for-eventbridge
Can I associate multiple SSL certificates with my Amazon CloudFront distribution?
I'm serving multiple CNAMEs (alternate domain names) through my Amazon CloudFront distribution. I want to turn on Secure Sockets Layer (SSL) or HTTPS for all the associated CNAMEs.
"I'm serving multiple CNAMEs (alternate domain names) through my Amazon CloudFront distribution. I want to turn on Secure Sockets Layer (SSL) or HTTPS for all the associated CNAMEs.ResolutionYou can't associate more than one SSL or Transport Layer Security (TLS) certificate to an individual CloudFront distribution. However, certificates provided by AWS Certificate Manager (ACM) support up to 10 subject alternative names, including wildcards. To turn on SSL or HTTPS for multiple domains served through one CloudFront distribution, assign a certificate from ACM that includes all the required domains.To use an SSL certificate for multiple domain names with CloudFront, import your certificate into ACM or the AWS Identity and Access Management (IAM) certificate store. For instructions, see Importing an SSL/TLS Certificate.Related informationRequirements for using SSL/TLS certificates with CloudFrontFollow"
https://repost.aws/knowledge-center/associate-ssl-certificates-cloudfront
What do I do if I notice unauthorized activity in my AWS account?
"I see resources that I don't remember creating in the AWS Management Console. Or, I received a notification that my AWS resources or account might be compromised."
"I see resources that I don't remember creating in the AWS Management Console. Or, I received a notification that my AWS resources or account might be compromised.Short descriptionIf you suspect unauthorized activity in your AWS account, first verify if the activity was unauthorized by doing the following:Identify any unauthorized actions taken by the AWS Identity and Access Management (IAM) identities in your account.Identify any unauthorized access or changes to your account.Identify the creation of any unauthorized resources.Identify the creation of any unauthorized IAM resources, such as roles, managed policies, or management changes such as fraudulent linked accounts created in your AWS Organization.Then, if you see unauthorized activity, follow the instructions in the If there was unauthorized activity in your AWS account section of this article.Note: If you can't sign in to your account, see What do I do if I'm having trouble signing in to or accessing my AWS account?ResolutionVerify if there was unauthorized activity in your AWS accountIdentify any unauthorized actions taken by the IAM identities in your accountDetermine the last time that each IAM user password or access key was used. For instructions, see Getting credential reports for your AWS account.Determine what IAM users, user groups, roles, and policies were used recently. For instructions, see Viewing last accessed information for IAM.Identify any unauthorized access or changes to your accountFor instructions, see How can I monitor the account activity of specific IAM users, roles, and AWS access keys? Also, see How can I troubleshoot unusual resource activity with my AWS account?Identify the creation of any unauthorized resources or IAM usersTo identify any unauthorized resource usage, including unexpected services, Regions, or charges to your account, review the following:Cost & Usage Reports for your accountThe AWS Trusted Advisor check referenceThe Bills page of the AWS Management ConsoleNote: You can also use AWS Cost Explorer to review the charges and usage associated with your AWS account. For more information, see How can I use Cost Explorer to analyze my spending and usage?If there was unauthorized activity in your AWS accountImportant: If you received a notification from AWS about irregular activity in your account, first do the following instructions. Then, respond to the notification in the AWS Support Center with a confirmation of the actions that you completed.Rotate and delete exposed account access keysCheck the irregular activity notification sent by AWS Support for exposed account access keys. If there are keys listed, then do the following for those keys:Create a new AWS access key.Modify your application to use the new access key.Deactivate the original access key.Important: Don't delete the original access key yet. Deactivate the original access key only.Verify that there aren't any issues with your application. If there are issues, reactivate the original access key temporarily to remediate the problem.If your application is fully functional after deactivating the original access key, then delete the original access key.Delete the AWS account root user access keys that you no longer need or didn't create.For more information, see Best practices for managing AWS access keys and Managing access keys for IAM users.Rotate any potentially unauthorized IAM user credentialsOpen the IAM console.In the left navigation pane, choose Users. 
A list of the IAM users in your AWS account appears.Choose the name of the first IAM user on the list. The IAM user's Summary page opens.In the Permissions tab, under the Permissions policies section, look for a policy named AWSExposedCredentialPolicy_DO_NOT_REMOVE. If the user has this policy attached, then rotate the access keys for the user.Repeat steps 3 and 4 for each IAM user in your account.Delete any IAM users that you didn't create.Change the password for all of the IAM users that you created and want to keep.If you use temporary security credentials, then see Revoking IAM role temporary security credentials.Check your AWS CloudTrail logs for unsanctioned activityOpen the AWS CloudTrail console.In the left navigation pane, choose Event history.Review for any unsanctioned activity, such as the creation of access keys, policies, roles, or temporary security credentials.Important: Be sure to review the Event time to confirm if the resources were created recently and match the irregular activity.Delete any access keys, policies, roles, or temporary security credentials that you have identified as unsanctioned.For more information, see Working with CloudTrail.Delete any unrecognized or unauthorized resources1.    Sign in to the AWS Management Console. Then, verify that all the resources in your account are resources that you launched. Be sure to check and compare the usage from the previous month to the current one. Make sure that you look for all resources in all AWS Regions—even Regions where you never launched resources. Also, pay special attention to the following resource types:Amazon EC2 instances, Spot Instances, and Amazon Machine Images (AMIs), including instances in the stopped stateAWS Auto Scaling groupsAmazon Elastic Block Store (Amazon EBS) volumes and snapshotsAmazon Elastic Container Service (Amazon ECS) clustersAmazon Elastic Container Registry (Amazon ECR) repositoriesAWS Lambda functions and layersAmazon Lightsail instancesAmazon Route 53 domainsAmazon SageMaker notebook instances2.    Delete any unrecognized or unauthorized resources. For instructions, see How do I terminate active resources that I no longer need on my AWS account?Important: If you must keep any resources for investigation, consider backing up those resources. For example, if you must retain an EC2 instance for regulatory, compliance, or legal reasons, then create an Amazon EBS snapshot before terminating the instance.Recover backed-up resourcesIf you configured services to maintain backups, then recover those backups from their last known uncompromised state.For more information about how to restore specific types of AWS resources, see the following:Restoring from an Amazon EBS snapshot or an AMIRestoring from a DB snapshot (Amazon Relational Database Service (Amazon RDS) DB instances)Restoring previous versions (Amazon Simple Storage Service (Amazon S3) object versions)Verify your account informationVerify that all of the following information is correct in your AWS account:Account name and email addressContact information (Make sure that your phone number is correct)Alternate contactsNote: For more information about AWS account security best practices, see What are some best practices for securing my AWS account and its resources?Related informationAWS security incident response guideAWS security credentialsAWS security audit guidelinesBest practices for Amazon EC2Follow"
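Several of the checks above can also be run from the AWS CLI. The following sketch generates the credential report and deactivates a suspect access key; the user name uses a placeholder and the key ID is AWS's documentation example value.
# Generate and download the credential report for the account
aws iam generate-credential-report
aws iam get-credential-report --query Content --output text | base64 --decode > credential-report.csv
# Deactivate (don't delete yet) an access key that you suspect is exposed
aws iam update-access-key \
  --user-name example-user \
  --access-key-id AKIAIOSFODNN7EXAMPLE \
  --status Inactive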
https://repost.aws/knowledge-center/potential-account-compromise
How can I tag a root volume from an instance created by CloudFormation?
I want to tag the root volume of my Amazon Elastic Compute Cloud (Amazon EC2) instances that are created through AWS CloudFormation.
"I want to tag the root volume of my Amazon Elastic Compute Cloud (Amazon EC2) instances that are created through AWS CloudFormation.Short descriptionThe tag property of the EC2 instance resource doesn't extend to the volumes that are created through CloudFormation. Tagging can restrict the control that you have over your instances. Tagging helps you manage the costs of specific resources and restrict AWS Identity and Access Management (IAM) policies. Tagging also helps you exert similar control over other resources.Bootstrapping with CloudFormation allows you to tag the Amazon Elastic Block Store (Amazon EBS) root volume of your instance. The bootstrapping method is done through the UserData property of the AWS::EC2::Instance resource. To perform bootstrapping, use AWS Command Line Interface (AWS CLI) commands or standard Windows PowerShell commands after creating your instance.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.ResolutionCreate an instance with a CloudFormation template1.    Open the CloudFormation console.2.    Choose Create Stack, and then choose Design template.3.    In the code editor, on the Parameters tab, choose Template.4.    For Choose template language, choose YAML.5.    Copy either of the following JSON or YAML templates, and then paste that copied template into your code editor.JSON template:{ "AWSTemplateFormatVersion": "2010-09-09", "Description": "AWS CloudFormation Sample Template Tagging Root Volumes of EC2 Instances: This template shows you how to automatically tag the root volume of the EC2 instances that are created through the AWS CloudFormation template. This is done through the UserData property of the AWS::EC2::Instance resource. **WARNING** This template creates two Amazon EC2 instances and an IAM role. You will be billed for the AWS resources used if you create a stack from this template.", "Parameters": { "KeyName": { "Type": "AWS::EC2::KeyPair::KeyName", "Description": "Name of an existing EC2 KeyPair to enable SSH access to the ECS instances." }, "InstanceType": { "Description": "EC2 instance type", "Type": "String", "Default": "t2.micro", "AllowedValues": [ "t2.micro", "t2.small", "t2.medium", "t2.large", "m3.medium", "m3.large", "m3.xlarge", "m3.2xlarge", "m4.large", "m4.xlarge", "m4.2xlarge", "m4.4xlarge", "m4.10xlarge", "c4.large", "c4.xlarge", "c4.2xlarge", "c4.4xlarge", "c4.8xlarge", "c3.large", "c3.xlarge", "c3.2xlarge", "c3.4xlarge", "c3.8xlarge", "r3.large", "r3.xlarge", "r3.2xlarge", "r3.4xlarge", "r3.8xlarge", "i2.xlarge", "i2.2xlarge", "i2.4xlarge", "i2.8xlarge" ], "ConstraintDescription": "Please choose a valid instance type." }, "InstanceAZ": { "Description": "EC2 AZ.", "Type": "AWS::EC2::AvailabilityZone::Name", "ConstraintDescription": "Must be the name of an Availability Zone." 
}, "WindowsAMIID": { "Description": "The Latest Windows 2016 AMI taken from the public Systems Manager Parameter Store", "Type": "AWS::SSM::Parameter::Value<String>", "Default": "/aws/service/ami-windows-latest/Windows_Server-2016-English-Full-Base" }, "LinuxAMIID": { "Description": "The Latest Amazon Linux 2 AMI taken from the public Systems Manager Parameter Store", "Type": "AWS::SSM::Parameter::Value<String>", "Default": "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2" } }, "Resources": { "WindowsInstance": { "Type": "AWS::EC2::Instance", "Properties": { "ImageId": { "Ref": "WindowsAMIID" }, "InstanceType": { "Ref": "InstanceType" }, "AvailabilityZone": { "Ref": "InstanceAZ" }, "IamInstanceProfile": { "Ref": "InstanceProfile" }, "KeyName": { "Ref": "KeyName" }, "UserData": { "Fn::Base64": { "Fn::Join": [ "", [ "<powershell>\n", "try {\n", "$AWS_AVAIL_ZONE=(Invoke-WebRequest -Uri 'http://169.254.169.254/latest/meta-data/placement/availability-zone' -UseBasicParsing).Content\n ", "$AWS_REGION=$AWS_AVAIL_ZONE.Substring(0,$AWS_AVAIL_ZONE.length-1)\n ", "$AWS_INSTANCE_ID=(Invoke-WebRequest -Uri 'http://169.254.169.254/latest/meta-data/instance-id' -UseBasicParsing).Content\n ", "$ROOT_VOLUME_IDS=((Get-EC2Instance -Region $AWS_REGION -InstanceId $AWS_INSTANCE_ID).Instances.BlockDeviceMappings | where-object DeviceName -match '/dev/sda1').Ebs.VolumeId\n ", "$tag = New-Object Amazon.EC2.Model.Tag\n ", "$tag.key = \"MyRootTag\"\n ", "$tag.value = \"MyRootVolumesValue\"\n ", "New-EC2Tag -Resource $ROOT_VOLUME_IDS -Region $AWS_REGION -Tag $tag\n", "}\n", "catch {\n", "Write-Output $PSItem\n", "}\n", "</powershell>\n" ] ] } }, "Tags": [ { "Key": "Name", "Value": { "Ref": "AWS::StackName" } } ], "BlockDeviceMappings": [ { "DeviceName": "/dev/sdm", "Ebs": { "VolumeType": "io1", "Iops": "200", "DeleteOnTermination": "true", "VolumeSize": "10" } } ] } }, "LinuxInstance": { "Type": "AWS::EC2::Instance", "Properties": { "ImageId": { "Ref": "LinuxAMIID" }, "InstanceType": { "Ref": "InstanceType" }, "AvailabilityZone": { "Ref": "InstanceAZ" }, "IamInstanceProfile": { "Ref": "InstanceProfile" }, "KeyName": { "Ref": "KeyName" }, "UserData": { "Fn::Base64": { "Fn::Join": [ "", [ "#!/bin/sh\n", "AWS_AVAIL_ZONE=$(curl http://169.254.169.254/latest/meta-data/placement/availability-zone)\n", "AWS_REGION=${AWS_AVAIL_ZONE::-1}\n", "AWS_INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)\n", "ROOT_VOLUME_IDS=$(aws ec2 describe-instances --region $AWS_REGION --instance-id $AWS_INSTANCE_ID --output text --query Reservations[0].Instances[0].BlockDeviceMappings[0].Ebs.VolumeId)\n", "aws ec2 create-tags --resources $ROOT_VOLUME_IDS --region $AWS_REGION --tags Key=MyRootTag,Value=MyRootVolumesValue\n" ] ] } }, "Tags": [ { "Key": "Name", "Value": { "Ref": "AWS::StackName" } } ], "BlockDeviceMappings": [ { "DeviceName": "/dev/sdm", "Ebs": { "VolumeType": "io1", "Iops": "200", "DeleteOnTermination": "true", "VolumeSize": "10" } } ] } }, "InstanceRole": { "Type": "AWS::IAM::Role", "Properties": { "AssumeRolePolicyDocument": { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "ec2.amazonaws.com" ] }, "Action": [ "sts:AssumeRole" ] } ] }, "Path": "/", "Policies": [ { "PolicyName": "taginstancepolicy", "PolicyDocument": { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:Describe*" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "ec2:CreateTags" ], "Resource": [ { "Fn::Sub": 
"arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:volume/*" }, { "Fn::Sub": "arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:instance/*" } ] } ] } } ] } }, "InstanceProfile": { "Type": "AWS::IAM::InstanceProfile", "Properties": { "Path": "/", "Roles": [ { "Ref": "InstanceRole" } ] } } }}YAML template:AWSTemplateFormatVersion: 2010-09-09Description: >- AWS CloudFormation Sample Template Tagging Root Volumes of EC2 Instances: This template shows you how to automatically tag the root volume of the EC2 instances that are created through the AWS CloudFormation template. This is done through the UserData property of the AWS::EC2::Instance resource. **WARNING** This template creates two Amazon EC2 instances and an IAM role. You will be billed for the AWS resources used if you create a stack from this template.Parameters: KeyName: Type: 'AWS::EC2::KeyPair::KeyName' Description: Name of an existing EC2 KeyPair to enable SSH access to the ECS instances. InstanceType: Description: EC2 instance type Type: String Default: t2.micro AllowedValues: - t2.micro - t2.small - t2.medium - t2.large - m3.medium - m3.large - m3.xlarge - m3.2xlarge - m4.large - m4.xlarge - m4.2xlarge - m4.4xlarge - m4.10xlarge - c4.large - c4.xlarge - c4.2xlarge - c4.4xlarge - c4.8xlarge - c3.large - c3.xlarge - c3.2xlarge - c3.4xlarge - c3.8xlarge - r3.large - r3.xlarge - r3.2xlarge - r3.4xlarge - r3.8xlarge - i2.xlarge - i2.2xlarge - i2.4xlarge - i2.8xlarge ConstraintDescription: Please choose a valid instance type. InstanceAZ: Description: EC2 AZ. Type: 'AWS::EC2::AvailabilityZone::Name' ConstraintDescription: Must be the name of an Availability Zone. WindowsAMIID: Description: >- The Latest Windows 2016 AMI taken from the public Systems Manager Parameter Store Type: 'AWS::SSM::Parameter::Value<String>' Default: /aws/service/ami-windows-latest/Windows_Server-2016-English-Full-Base LinuxAMIID: Description: >- The Latest Amazon Linux 2 AMI taken from the public Systems Manager Parameter Store Type: 'AWS::SSM::Parameter::Value<String>' Default: /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2Resources: WindowsInstance: Type: 'AWS::EC2::Instance' Properties: ImageId: !Ref WindowsAMIID InstanceType: !Ref InstanceType AvailabilityZone: !Ref InstanceAZ IamInstanceProfile: !Ref InstanceProfile KeyName: !Ref KeyName UserData: !Base64 'Fn::Join': - '' - - | <powershell> - | try { - >- $AWS_AVAIL_ZONE=(Invoke-WebRequest -Uri 'http://169.254.169.254/latest/meta-data/placement/availability-zone' -UseBasicParsing).Content - |- $AWS_REGION=$AWS_AVAIL_ZONE.Substring(0,$AWS_AVAIL_ZONE.length-1) - >- $AWS_INSTANCE_ID=(Invoke-WebRequest -Uri 'http://169.254.169.254/latest/meta-data/instance-id' -UseBasicParsing).Content - >- $ROOT_VOLUME_IDS=((Get-EC2Instance -Region $AWS_REGION -InstanceId $AWS_INSTANCE_ID).Instances.BlockDeviceMappings | where-object DeviceName -match '/dev/sda1').Ebs.VolumeId - |- $tag = New-Object Amazon.EC2.Model.Tag - |- $tag.key = "MyRootTag" - |- $tag.value = "MyRootVolumesValue" - > New-EC2Tag -Resource $ROOT_VOLUME_IDS -Region $AWS_REGION -Tag $tag - | } - | catch { - | Write-Output $PSItem - | } - | </powershell> Tags: - Key: Name Value: !Ref 'AWS::StackName' BlockDeviceMappings: - DeviceName: /dev/sdm Ebs: VolumeType: io1 Iops: '200' DeleteOnTermination: 'true' VolumeSize: '10' LinuxInstance: Type: 'AWS::EC2::Instance' Properties: ImageId: !Ref LinuxAMIID InstanceType: !Ref InstanceType AvailabilityZone: !Ref InstanceAZ IamInstanceProfile: !Ref InstanceProfile KeyName: !Ref KeyName UserData: !Base64 'Fn::Join': 
- '' - - | #!/bin/sh - > AWS_AVAIL_ZONE=$(curl http://169.254.169.254/latest/meta-data/placement/availability-zone) - | AWS_REGION=${AWS_AVAIL_ZONE::-1} - > AWS_INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id) - > ROOT_VOLUME_IDS=$(aws ec2 describe-instances --region $AWS_REGION --instance-id $AWS_INSTANCE_ID --output text --query Reservations[0].Instances[0].BlockDeviceMappings[0].Ebs.VolumeId) - > aws ec2 create-tags --resources $ROOT_VOLUME_IDS --region $AWS_REGION --tags Key=MyRootTag,Value=MyRootVolumesValue Tags: - Key: Name Value: !Ref 'AWS::StackName' BlockDeviceMappings: - DeviceName: /dev/sdm Ebs: VolumeType: io1 Iops: '200' DeleteOnTermination: 'true' VolumeSize: '10' InstanceRole: Type: 'AWS::IAM::Role' Properties: AssumeRolePolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: Service: - ec2.amazonaws.com Action: - 'sts:AssumeRole' Path: / Policies: - PolicyName: taginstancepolicy PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Action: - 'ec2:Describe*' Resource: '*' - Effect: Allow Action: - 'ec2:CreateTags' Resource: - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:volume/*' - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:instance/*' InstanceProfile: Type: 'AWS::IAM::InstanceProfile' Properties: Path: / Roles: - !Ref InstanceRole6.    In the UserData section of the template, update --tags Key=MyRootTag,Value=MyRootVolumesValue to match your requirements for a Linux instance. For a Windows instance, update $tag.key="MyRootTag" and $tag.value="MyRootVolumesValue". See the following UserData section examples for Linux and Windows.Linux example:#Linux UserData UserData: Fn::Base64: !Sub | #!/bin/bash AWS_AVAIL_ZONE=$(curl http://169.254.169.254/latest/meta-data/placement/availability-zone) AWS_REGION="`echo \"$AWS_AVAIL_ZONE\" | sed 's/[a-z]$//'`" AWS_INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id) ROOT_VOLUME_IDS=$(aws ec2 describe-instances --region $AWS_REGION --instance-id $AWS_INSTANCE_ID --output text --query Reservations[0].Instances[0].BlockDeviceMappings[0].Ebs.VolumeId) aws ec2 create-tags --resources $ROOT_VOLUME_IDS --region $AWS_REGION --tags Key=MyRootTag,Value=MyRootVolumesValueWindows example:#Windows UserData with standard Powershell commands (no AWS CLI installed) UserData: Fn::Base64: !Sub | <powershell> try { $AWS_AVAIL_ZONE=(Invoke-WebRequest -Uri 'http://169.254.169.254/latest/meta-data/placement/availability-zone' -UseBasicParsing).Content $AWS_REGION=$AWS_AVAIL_ZONE.Substring(0,$AWS_AVAIL_ZONE.length-1) $AWS_INSTANCE_ID=(Invoke-WebRequest -Uri 'http://169.254.169.254/latest/meta-data/instance-id' -UseBasicParsing).Content $ROOT_VOLUME_IDS=((Get-EC2Instance -Region $AWS_REGION -InstanceId $AWS_INSTANCE_ID).Instances.BlockDeviceMappings | where-object DeviceName -match '/dev/sda1').Ebs.VolumeId $tag = New-Object Amazon.EC2.Model.Tag $tag.key = "MyRootTag" $tag.value = "MyRootVolumesValue" New-EC2Tag -Resource $ROOT_VOLUME_IDS -Region $AWS_REGION -Tag $tag } catch { Write-Output $PSItem } </powershell>Important: To use the AWS CLI commands with UserData, you must install the AWS CLI within the Amazon Machine Image (AMI) of your EC2 instances. The AWS CLI is installed by default on all Amazon Linux AMIs. You must also attach an instance profile to your EC2 instances. The instance profile includes the permissions to call the ec2:DescribeInstances and ec2:CreateTags APIs only on EC2 volumes and instances within the AWS Region and account.7.    Choose the Create stack icon.8.    
For Stack name, enter a name for your stack.9.    In the Parameters section, enter the appropriate information based on the needs of your environment, including your instance type, EC2 key pair, and AMI.10.    Choose Next.11.    In the Options section, enter the appropriate information for your stack, and then choose Next.12.    To enable the CloudFormation stack to create an IAM resource, select the "I acknowledge that AWS CloudFormation might create IAM resources" check box.13.    Choose Create.Tag the root volume of the instance1.    Open the Amazon EC2 console.2.    In the navigation pane, in the Elastic Block Store section, choose Volumes.3.    In the Filter field, enter the tag that you set in the CloudFormation stack to confirm that the volume was tagged.Follow"
https://repost.aws/knowledge-center/cloudformation-instance-tag-root-volume
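If you prefer to confirm the tagging programmatically rather than filter volumes in the EC2 console, a minimal boto3 sketch such as the following can list the volumes that carry the tag applied by the sample template's UserData. It assumes the template's default tag key and value (MyRootTag/MyRootVolumesValue) and that your credentials and Region are already configured; adjust the filter if you changed the tag.

import boto3

# Assumes credentials and a default Region are configured (for example, through the AWS CLI).
ec2 = boto3.client("ec2")

# Filter on the tag that the template's UserData adds to each instance's root volume.
response = ec2.describe_volumes(
    Filters=[{"Name": "tag:MyRootTag", "Values": ["MyRootVolumesValue"]}]
)

for volume in response["Volumes"]:
    attached_to = ", ".join(a["InstanceId"] for a in volume.get("Attachments", []))
    print(f"{volume['VolumeId']} is tagged (attached to: {attached_to or 'none'})")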
What do I do if I receive an error when entering the CAPTCHA to sign in to my AWS account?
I can't complete the CAPTCHA when signing in to an existing account or when activating a new AWS account.
"I can't complete the CAPTCHA when signing in to an existing account or when activating a new AWS account.ResolutionTry the following:Use a different internet browser.If you're using a mobile device, try using a desktop browser instead.Clear your browser's cache and cookies.Wait 15 minutes, and then try to sign in again.If you're still having trouble signing in after trying these troubleshooting steps, contact AWS Support.Related informationWhat do I do if I can't access my AWS account?How can I verify my AWS account if my phone verification PIN isn't working?What do I do if I receive the error "Your account is in an invalid state" when I try to access services through my AWS account?How do I create and activate a new AWS account?Follow"
https://repost.aws/knowledge-center/captcha-error
How do I use my own Microsoft RDS CALs with AppStream 2.0?
I want to use my own Microsoft Remote Desktop Service Client Access License (RDS CAL) with Amazon AppStream 2.0. How can I do that?
"I want to use my own Microsoft Remote Desktop Service Client Access License (RDS CAL) with Amazon AppStream 2.0. How can I do that?ResolutionIf you have Microsoft Software Assurance with License Mobility, you might be able to bring your own Microsoft RDS CALs and then use them with AppStream 2.0. Users who are covered by your RDS CAL don't incur the monthly user fees. For more information about how to sign up for and complete a license verification process, and to view eligibility requirements, see License Mobility.First, sign up and complete the Microsoft license verification form to confirm that you have eligible licenses with Software Assurance. For more information, see License Mobility through Software Assurance on the Microsoft website. In the License Mobility Verification form, provide the following information about the Authorized Mobility Partner:Email Address: microsoft@amazon.comPartner Name: Amazon Web ServicesPartner Website: aws.amazon.comAfter the form is submitted, Microsoft provides confirmation to you and to Amazon Web Services (AWS) that you have completed the verification process. After the verification process is complete, submit the AWS License Mobility Agreement verification form.Next, create a case in the AWS Support Center. Follow these steps:Go to the AWS Support Center, and then choose Create case.Choose Account and billing support.For Type, choose Billing.For Category, choose Other Billing Questions.For Case description, enter the following template, and include your details for each line:I want to use my RDS CAL with AppStream 2.0. Please inform the AppStream 2.0 team that our License Mobility Agreement verification form is submitted. The AppStream 2.0 team requested the following information:AWS account ID:Number of licenses ported:AWS Region in which we will use the ported license:Microsoft agreement #:Date and time zone when the Microsoft agreement expires:Related informationAmazon AppStream 2.0 FAQsAmazon Appstream 2.0 pricingFollow"
https://repost.aws/knowledge-center/appstream2-rds-cal
Why is my AWS Glue ETL job reprocessing data even when job bookmarks are enabled?
"I enabled job bookmarks for my AWS Glue job, but the job is still reprocessing data."
"I enabled job bookmarks for my AWS Glue job, but the job is still reprocessing data.ResolutionHere are some common reasons why an extract, transform, and load (ETL) job might reprocess data even though job bookmarks are enabled:You have multiple concurrent jobs with job bookmarks, and the max concurrency isn't set to 1.The job.init() object is missing.The job.commit() object is missing.The transformation_ctx parameter is missing.The table's primary keys aren't in sequential order (JDBC connections only).The source data was modified after your last job run.For more information about each of these issues, see Error: A job is reprocessing data when job bookmarks are enabled.Related informationTracking processed data using job bookmarksFollow"
https://repost.aws/knowledge-center/glue-reprocess-data-job-bookmarks-enabled
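For reference, the following is a minimal AWS Glue (PySpark) script sketch showing where job.init(), transformation_ctx, and job.commit() belong so that job bookmarks can track processed data. The database, table, and S3 path names are placeholders for illustration only; substitute your own, and keep max concurrency at 1 when bookmarks are enabled.

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())

# job.init() loads the existing bookmark state before any bookmarked reads run.
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# transformation_ctx names this read; the bookmark state is keyed on that name.
source = glue_context.create_dynamic_frame.from_catalog(
    database="my_database",        # placeholder
    table_name="my_table",         # placeholder
    transformation_ctx="source0",
)

glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://my-output-bucket/processed/"},  # placeholder
    format="parquet",
    transformation_ctx="sink0",
)

# job.commit() persists the updated bookmark; without it, the next run reprocesses the same data.
job.commit()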
How can I decrease the total provisioned storage size of my Amazon RDS DB instance?
I want to decrease the total allocated storage size of my Amazon Relational Database Service (Amazon RDS) DB instance. How can I do this?
"I want to decrease the total allocated storage size of my Amazon Relational Database Service (Amazon RDS) DB instance. How can I do this?Short descriptionAfter you create an Amazon RDS DB instance, you can't modify the allocated storage size of the DB instance to decrease the total storage space it uses. To decrease the storage size of your DB instance, create a new DB instance that has less provisioned storage size. Then, migrate your data into the new DB instance using one of the following methods:Use the database engine's native dump and restore method. This method causes some downtime.Use AWS Database Migration Service (AWS DMS) for minimal downtime.ResolutionDB dump and restoreOpen the Amazon RDS console, and then choose Databases from the navigation pane.Choose Create database.Launch a new Amazon RDS DB instance that has a smaller storage size than your existing DB instance.Use your database engine's native tools to dump your existing DB instance (the instance you want to decrease in size).Optionally, you can rename your old DB instance, and then name the new DB instance using the old DB instance's name. Or, you can reconfigure applications to use the new DB instance's name.Restore the database in your new DB instance.To restore your database, you can use the pg_dump utility for PostgreSQL or for PostgreSQL versions 10.10 and later, and 11.5. Or, you can use Transportable Databases, which moves data much faster than the pg_dump/pg_restore method. The mysqldump utility is available for importing data into MySQL/MariaDB engines, or you can use the external replication method for reduced downtime. Similarly, you can use Data Pump for Oracle and native full backup (.bak files) for SQL Server.Note: Downtime occurs from the time that your old DB instance stops receiving connections until the time that Amazon RDS directs the connections from your application to the new DB instance.Replication with AWS DMSYou can use AWS DMS to set up homogeneous replication between your two DB instances. For more information, see Getting started with AWS Database Migration Service.Related informationSources for AWS Database Migration ServiceTargets for AWS Database Migration ServiceRestoring from a DB snapshotAmazon RDS pricingFollow"
https://repost.aws/knowledge-center/rds-db-storage-size
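If you script the "create a smaller instance" step instead of using the console, a boto3 sketch along these lines can create the new, smaller target instance before you run the dump and restore. The identifiers, instance class, engine, and storage size below are placeholders; mirror your existing instance's engine, version, and parameter and option groups, and reduce only AllocatedStorage.

import boto3

rds = boto3.client("rds")

# Placeholder values for illustration; match the engine, version, and other settings
# of the source instance, with only AllocatedStorage reduced.
rds.create_db_instance(
    DBInstanceIdentifier="mydb-smaller",
    DBInstanceClass="db.t3.medium",
    Engine="postgres",
    AllocatedStorage=50,  # new, smaller size in GiB
    MasterUsername="dbadmin",
    MasterUserPassword="replace-with-a-strong-password",
)

# Wait for the new instance to become available before restoring the dump into it.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="mydb-smaller")
print("New DB instance is available; restore your dump into it next.")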
How do I get technical support from AWS?
"I am an AWS customer, and I am looking for technical support."
"I am an AWS customer, and I am looking for technical support.ResolutionFor technical support, all AWS customers have access to AWS documentation, the AWS Knowledge Center, AWS Prescriptive Guidance , and AWS re:Post.You can also subscribe to a Developer, Business, Enterprise On-Ramp, or Enterprise Support plan to receive one-on-one fast-response support from experienced technical support engineers. With these Support plans, you get pay-by-the-month pricing and unlimited support cases. If you have operational issues or technical questions, you can contact a team of support engineers and receive predictable response times and personalized support.After you sign up for a Developer, Business, Enterprise On-Ramp, or Enterprise Support plan, you can open a technical support case by doing the following:Open the AWS Support Center.Choose Create case.On the Create case page, select Technical support.Enter the required information.Choose Submit.To learn more about the types of technical issues that are supported by AWS, see Scope of AWS Support.To get personalized technical support, you must sign up for a Developer, Business, Enterprise On-Ramp, or Enterprise Support plan. All AWS customers receive support for account and billing questions and service quota increases.If you have a Basic Support plan and require one-on-one technical support, you can upgrade your Support plan. For more information, see How do I change my AWS Support plan?Related informationCompare AWS Support plansGetting started with AWS SupportAWS Support plan pricingDoes AWS Support have a phone number?Follow"
https://repost.aws/knowledge-center/get-aws-technical-support
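If your account has a Business, Enterprise On-Ramp, or Enterprise Support plan, you can also open a technical case programmatically through the AWS Support API (the API isn't available on the Basic or Developer plans). The sketch below is illustrative only; the service and category codes are placeholders, and you can list valid values with describe_services.

import boto3

# The AWS Support API is served from the us-east-1 endpoint and requires a Business,
# Enterprise On-Ramp, or Enterprise Support plan.
support = boto3.client("support", region_name="us-east-1")

# Placeholder service and category codes; call support.describe_services() for valid values.
case = support.create_case(
    subject="Example technical question",
    serviceCode="general-info",   # placeholder
    categoryCode="using-aws",     # placeholder
    severityCode="low",
    communicationBody="Describe the technical issue here.",
    issueType="technical",
)
print("Created case:", case["caseId"])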
How do I manage and view the Systems Manager patch and association compliance data for all my accounts using QuickSight?
I want to use Amazon QuickSight to manage and view compliance data for AWS Systems Manager.
"I want to use Amazon QuickSight to manage and view compliance data for AWS Systems Manager.ResolutionWith Amazon QuickSight, you can query, analyze, and visualize Systems Manager Inventory data. You can also publish interactive dashboards. You can use Amazon QuickSight with Amazon Athena table dataset to create dashboards and widgets for displaying compliance information.Check the prerequisitesSet up Systems Manager Inventory and resource data sync in your account. This setup allows the following:Systems Manager gathers the inventory information.Resource data sync synchronizes the inventory to an Amazon Simple Storage Service (Amazon S3) bucket.You can create this setup for multi-account multi-Region use cases and synchronize the data to a central S3 bucket. Amazon Athena integration uses resource data sync to view the inventory data from all managed nodes in the inventory data Detailed view page. For more information, see Querying inventory data from multiple Regions and accounts.After setting up Systems Manager Inventory, resource data sync, and Athena access configuration, you can proceed to set up your QuickSight account.Set up a QuickSight accountIf you don't have an Amazon QuickSight account, log in to your AWS Management Console with the AWS Identity and Access Management (IAM) user or role that has appropriate QuickSight permissions. Go to Amazon QuickSight to create a new account.Choose Enterprise or Enterprise + Q.-or-Scroll down, and then choose Sign up for Standard Edition.Choose the appropriate IAM identity.Under Quicksight access to AWS services, select Amazon Athena and Amazon S3.For Select Amazon S3 buckets, select the target S3 bucket where inventory data is stored. Select Write permission for Athena Workgroup against the selected S3 bucket to give write permission for Athena Workgroup.Choose Finish.If you have an existing QuickSight account, do the following in your QuickSight profile:Choose the user profile, and then choose Manage QuickSight.Choose Security & permissions.Choose Add or remove under QuickSight access to AWS services.To allow Athena and S3 permissions, follow steps 2 and 3 from the previous section.Create a dataset in QuickSightYou can create datasets in QuickSight using Athena tables as the source. An AWS Glue crawler crawls the inventory data in the S3 bucket and updates the tables in the AWS Glue Data Catalog. These tables are then made available in Athena by the AWS Glue crawler. Each inventory metadata has a corresponding Athena table that's created by AWS Glue. To create the dataset and analyze the data, use the aws_compliancesummary and aws_complianceitem tables:On the QuickSight start page, choose Datasets from the navigation pane, and then choose New dataset.Under Create a Dataset, select Athena as the data source.Enter the data source name.Choose Create data source.From the dropdown list of databases, select the S3 bucket with inventory data.The database name is in the format S3_bucket_name-<region>-database.Select aws_compliancesummary from the list of tables, and then choose Select.Select Directly query your data.Choose Edit/Preview data.Choose Save and publish.Use the preceding instructions to create another dataset for the aws_complianceitem table.Analyze the datasetYou can use QuickSight Analyses to visualize and analyze data. 
To use the aws_compliancesummary and aws_complianceitem datasets for data analysis, do the following:On the Amazon QuickSight start page, choose New Analysis.On the Datasets page, choose the aws_compliancesummary dataset, and then choose USE IN ANALYSIS.To add multiple datasets in the same analysis, choose Edit (pencil icon) next to Dataset.In the pop-up page that appears, choose Add dataset, and then select aws_complianceitem from the list. Choose Select.From the dataset dropdown list, you can view these two datasets for Analyses.Note: You can also add multiple other datasets to the same analysis to create the visuals.Add visualsNote: The provided steps for adding visuals are examples. You can create these graphs according to your use case and requirements.You can add a visual to your Amazon QuickSight analysis based on the QuickSight dataset. The dataset includes the tables from Athena that have the Systems Manager inventory data with compliance information.Visuals for aws_compliancesummary datasetYou can add a visual for the aws_compliancesummary dataset by following the instructions in Adding a visual.You can also add filters to filter the data based on compliance type, such as patch compliance and association compliance:Choose Filter from the left navigation pane.Choose ADD FILTER, and then select Compliance type.From the list of values, select Patch to include only patch compliance.Select all applicable visuals in the Applied to dropdown.Choose Apply.To view the count of resources based on patch compliance status, do the following:In Visual types, select Donut chart.From the Fields list, select Status to add it to Group/Color dimension.Drag and drop Resourceid under Value.To count distinct values, choose the arrow that's next to resourceid.Choose Aggregate: Count, and then choose Count distinct.Select the graph. Then, choose the Format visual icon (pencil icon).Under Data labels, select Show metric.You can see the actual values and percentages in the graph.To view the compliant instances by Region, do the following:Choose the preceding visual, and then choose the three dots on the chart.Choose Duplicate visual.Under Field wells, in the Group/Color dimension dropdown, select region.Choose Filter, choose ADD FILTER, select Status, and then select COMPLIANT.Choose Apply.You can see the graph for compliant instances in each Region.To view noncompliant instances by Region, do the following:In the preceding visual, choose the three dots on the chart.Choose Duplicate visual.Choose Filter. Choose the Status filter, and then select NON_COMPLIANT.Choose Apply.You can see the graphs for noncompliant instances in each Region.To view the account information for all accounts in a multi-account setup, use the preceding visual. Under Field wells, in the Group/Color dimension, select accountid. You can see the graph that's based on account IDs.Visuals for aws_complianceitem datasetYou can add a visual for the aws_complianceitem dataset by following the instructions in Adding a visual.You can also add filters to filter the data based on compliance type, such as patch compliance and association compliance. To do so, use corresponding instructions from the preceding section.To view the list of missing patches by instance, do the following:In Visual types, select Pivot table.Add Region, resourceid, patchstate, id, and title under Rows and id under Values.To count distinct values, choose the arrow that's next to id.Choose Aggregate: Count, and then choose Count distinct.Choose Filter. 
Choose ADD FILTER, choose Patchstate, and then select Missing.Choose Apply.To view the list of instances by compliance status, do the following:Select the aws_complianceitem dataset.Choose ADD, and then choose Add visual.In Visual types, select Pivot table.Add Region, resourceid, patchstate, id, and title under Rows and id under Values.To count distinct values, choose the arrow that's next to id.Choose Aggregate: Count, and then choose Count distinct.To get information on all accounts, in the preceding visual, add accountid as the first field under Rows. This filters the pivot table based on account ID.Publish a dashboardYou can publish all the visuals that are created as a dashboard and share it with other users.After adding all the visuals, choose Themes, and then select the appropriate theme.Choose Share on the top right of the page.Select Publish Dashboard.Enter a name for the dashboard, and then choose Publish Dashboard.ConsiderationsAWS Glue crawler crawls the inventory data in the central S3 bucket twice daily by default. Therefore, data is updated based on this schedule. You can modify the frequency based on the requirement by editing the AWS Glue crawler schedule.You can create a joined dataset in QuickSight to join multiple Athena tables to create a merged dataset. The use case for this scenario is to create a joined dataset with aws_compliancesummary and aws_instanceinformation tables to visualize the data based on Platform (Linux/Windows). The Platform information is captured only in the aws_instanceinformation table. Also, you can use this information to filter out data on terminated instances. This data is saved for 30 days in Systems Manager Inventory. For more information, see Joining data.Follow"
https://repost.aws/knowledge-center/systems-manager-compliance-data-quicksight
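If you want to sanity-check the compliance data in Athena before building the QuickSight visuals, a query like the one in the following sketch (run through boto3 here for illustration) summarizes patch compliance status from the aws_compliancesummary table. The database name, results location, and column names are assumptions based on the fields referenced above (the resource data sync database is typically named <bucket-name>-<region>-database); verify them against your own table schema.

import time

import boto3

athena = boto3.client("athena")

# Placeholders: the AWS Glue database created for your resource data sync bucket and an
# S3 location that the Athena workgroup can write query results to.
DATABASE = "my-inventory-bucket-us-east-1-database"
OUTPUT_LOCATION = "s3://my-athena-results-bucket/compliance/"

# Column names assumed from the fields used in the visuals above; confirm them in your schema.
QUERY = """
SELECT status, COUNT(DISTINCT resourceid) AS resources
FROM aws_compliancesummary
WHERE compliancetype = 'Patch'
GROUP BY status
"""

execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": DATABASE},
    ResultConfiguration={"OutputLocation": OUTPUT_LOCATION},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then print the result rows (header row first).
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
else:
    print("Query did not succeed:", state)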