Why is my Amazon RDS for MySQL or MariaDB instance showing as storage full?
My Amazon Relational Database Service (Amazon RDS) for MySQL or MariaDB instance is showing as storage full. Why is this happening and how do I view what is using storage in my DB instance?
"My Amazon Relational Database Service (Amazon RDS) for MySQL or MariaDB instance is showing as storage full. Why is this happening and how do I view what is using storage in my DB instance?Short descriptionTo troubleshoot a storage full issue, you must first analyze the total space used on your DB instance. Space on your DB instance is used for the following:User-created databasesTemporary tablesBinary logs or MySQL standby instance relay logs (if you use a read replica)InnoDB tablespaceGeneral logs, slow query logs, and error logsAfter you identify what's using storage space, you can reclaim storage space. Then, monitor the FreeStorageSpace metric to avoid running out of space again.Note: If there's a sudden decrease in available storage, check ongoing queries at the DB instance level by running the SHOW FULL PROCESSLIST command. The SHOW FULL PROCESSLIST command provides information about all active connections and queries that are performed by each connection. To review the transactions that have been active for a long time, run the INFORMATION_SCHEMA.INNODB_TRX or SHOW ENGINE INNODB STATUS command. Then, review the output.ResolutionAnalyze the total space used on the DB instance (user-created databases)To find the size of each user-created database, run the following query:mysql> SELECT table_schema, ROUND(SUM(data_length+index_length)/1024/1024/1024,2) "size in GB" FROM information_schema.tables GROUP BY 1 ORDER BY 2 DESC;To check the size of each table for a particular database (in your DB instance), run the following query:mysql> SELECT table_schema "DB Name", table_name,(data_length + index_length)/1024/1024/1024 AS "TableSizeinGB" from information_schema.tables where table_schema='database_name';To get more accurate tables sizes in MySQL version 5.7 and higher, or MySQL 8.0 and higher, use the following query:Note: The information_schema.files query is not applicable to MariaDB engines.mysql> SELECT file_name, ROUND(SUM(total_extents * extent_size)/1024/1024/1024,2) AS "TableSizeinGB" from information_schema.files where file_name like '%/database_name/%';To obtain complete storage details and approximate fragmented space at the database level and table level, run the following query:Note: This query is not applicable to tables residing in shared tablespace.mysql> SELECT table_schema AS "DB_NAME", SUM(size) "DB_SIZE", SUM(fragmented_space) APPROXIMATED_FRAGMENTED_SPACE_GB FROM (SELECT table_schema, table_name, ROUND((data_length+index_length+data_free)/1024/1024/1024,2) AS size, ROUND((data_length - (AVG_ROW_LENGTH*TABLE_ROWS))/1024/1024/1024,2) AS fragmented_space FROM information_schema.tables WHERE table_type='BASE TABLE' AND table_schema NOT IN ('performance_schema', 'mysql', 'information_schema') ) AS TEMP GROUP BY DB_NAME ORDER BY APPROXIMATED_FRAGMENTED_SPACE_GB DESC;mysql> SELECT table_schema DB_NAME, table_name TABLE_NAME, ROUND((data_length+index_length+data_free)/1024/1024/1024,2) SIZE_GB, ROUND((data_length - (AVG_ROW_LENGTH*TABLE_ROWS))/1024/1024/1024,2) APPROXIMATED_FRAGMENTED_SPACE_GB from information_schema.tables WHERE table_type='BASE TABLE' AND table_schema NOT IN ('performance_schema', 'mysql', 'information_schema') ORDER BY APPROXIMATED_FRAGMENTED_SPACE_GB DESC;Record the database sizes acquired from these two queries and compare them to the Amazon CloudWatch metrics in Amazon RDS. 
You can then confirm whether the full storage is caused by data usage.

Temporary tables

InnoDB user-created temporary tables and on-disk internal temporary tables are created in a temporary tablespace file named ibtmp1. Sometimes, the temporary tablespace file can even extend to ibtmp2 in the MySQL data directory.

Tip: If the temporary tablespace file (ibtmp1) uses excessive storage, reboot the DB instance to release the space.

Online DDL operations use temporary log files for the following:

Recording concurrent DML
Creating temporary sort files when an index is created
Creating temporary intermediate table files when tables are rebuilt (so that temporary tables can occupy storage)

Note: File sizes of the InnoDB tablespace can be queried only using MySQL version 5.7 and higher, or MySQL 8.0 and higher.

To find the InnoDB temporary tablespace, run the following query:

mysql> SELECT file_name, tablespace_name, table_name, engine, index_length, total_extents, extent_size FROM information_schema.files WHERE file_name LIKE '%ibtmp%';

To reclaim disk space that's occupied by a global temporary tablespace data file, restart the MySQL server or reboot your DB instance. For more information, see The temporary tablespace on the MySQL website.

InnoDB tablespace

Sometimes MySQL creates internal temporary tables that can't be removed because an ongoing query still references them. These temporary tables aren't listed in the information_schema.tables table. For more information, see Internal temporary table use in MySQL on the MySQL website.

Run the following query to find these internal temporary tables:

mysql> SELECT * FROM information_schema.innodb_sys_tables WHERE name LIKE '%#%';

The InnoDB system tablespace is the storage area for the InnoDB data dictionary. Along with the data dictionary, the doublewrite buffer, change buffer, and undo logs are also present in the InnoDB system tablespace. Additionally, the tablespace might contain index and table data if tables are created in the system tablespace (instead of file-per-table or general tablespaces).

Run the following query to find the InnoDB system tablespace:

mysql> SELECT file_name, tablespace_name, table_name, engine, index_length, total_extents, extent_size FROM information_schema.files WHERE file_name LIKE '%ibdata%';

Note: This query runs on MySQL version 5.7 and higher, or MySQL 8.0 and higher.

After the size of your system tablespace has increased, you can't reduce it. However, you can dump all of your InnoDB tables and import the tables into a new MySQL DB instance. To avoid large system tablespaces, consider using file-per-table tablespaces. For more information, see File-per-table tablespaces on the MySQL website.

If you enable innodb_file_per_table, then each table stores its data and indexes in its own tablespace file. You can reclaim the space lost to fragmentation on databases and tables by running OPTIMIZE TABLE on the affected table. The OPTIMIZE TABLE command creates a new empty copy of your table. Then, data from the old table is copied row by row to the new table. During this process, a new .ibd tablespace is created and space is reclaimed. For more information about this process, see OPTIMIZE TABLE statement on the MySQL website.

Important: The OPTIMIZE TABLE command uses the COPY algorithm to create temporary tables that are the same size as the original table.
Confirm that you have enough available disk space before running this command.

To optimize your table, run the following command:

mysql> OPTIMIZE TABLE <table_name>;

Or, you can rebuild the table by running the following command:

mysql> ALTER TABLE <table_name> ENGINE=INNODB;

Binary logs

If you activate automated backups on your Amazon RDS instance, then binary logging is also automatically activated on your DB instance. These binary logs are stored on disk and consume storage space, but they are purged according to the binary log retention configuration. The default binlog retention value for your instance is NULL, which means that the files are removed as soon as possible.

To avoid low storage space issues, set an appropriate binary log retention period in Amazon RDS for MySQL. You can review the number of hours that a binary log is retained with the mysql.rds_show_configuration command:

CALL mysql.rds_show_configuration;

You can also reduce this value to retain logs for a shorter period and reduce the amount of space that the logs use. A value of NULL means that logs are purged as soon as possible.

If there's a standby instance for the active instance, then monitor the ReplicaLag metric on the standby instance. The ReplicaLag metric indicates any delays during the binary log purge on the active instance and the relay log on the standby instance. If there are purging or replication issues, then these binary logs can accumulate over time and consume additional disk space. To check the number of binary logs on an instance and their file sizes, use the SHOW BINARY LOGS command. For more information, see SHOW BINARY LOGS statement on the MySQL website.

If the DB instance is acting as a replication standby instance, then check the size of the relay logs (the Relay_Log_Space value) by using the following command:

SHOW SLAVE STATUS\G

MySQL logs (general logs, slow query logs, and error logs)

Amazon RDS for MySQL provides logs (such as general logs, slow query logs, and error logs) that you can use to monitor your database. Error logs are active by default. The general logs and slow query logs can be activated by using a custom parameter group on the RDS instance. After the slow query logs and general logs are activated, they are automatically stored in the slow_log and general_log tables inside the mysql database. To check the sizes of any slow query logs, general logs (of FILE type), and error logs, view and list the database log files.

If the slow query log and general log tables are using excessive storage, then manage the table-based MySQL logs by manually rotating the log tables. To completely remove the old data and reclaim your disk space, call the following commands twice in succession:

mysql> CALL mysql.rds_rotate_slow_log;
mysql> CALL mysql.rds_rotate_general_log;

Note: The tables don't provide an accurate file size of the logs. Modify the parameter group so that the value of log_output for slow_log and general_log is FILE instead of TABLE.

It's also a best practice to monitor your Amazon RDS DB instance using Amazon CloudWatch. You can set up CloudWatch alarms on the FreeStorageSpace metric to receive alerts whenever your storage space drops below a certain threshold value.
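As a rough sketch (assuming an SNS topic already exists for notifications; the instance identifier, threshold, and topic ARN are placeholders), an alarm like the following alerts when free storage drops below about 10 GiB:

aws cloudwatch put-metric-alarm \
  --alarm-name rds-free-storage-low \
  --namespace AWS/RDS \
  --metric-name FreeStorageSpace \
  --dimensions Name=DBInstanceIdentifier,Value=mydbinstance \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 10737418240 \
  --comparison-operator LessThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-topic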
Finally, monitor the FreeStorageSpace metric by setting up a CloudWatch alarm to receive notifications whenever your DB instance is low on free space. For more information, see How can I create CloudWatch alarms to monitor the Amazon RDS free storage space and prevent storage full issues?

Also, you can use the Amazon RDS storage autoscaling feature to manage capacity automatically. With storage autoscaling, you don't have to manually scale up database storage. For more information about Amazon RDS storage autoscaling, see Working with storage for Amazon RDS DB instances.

Related information

How do I resolve problems with my Amazon RDS for MySQL DB instance that's using more storage than expected?
https://repost.aws/knowledge-center/view-storage-rds-mysql-mariadb
Why isn't my Lambda function with an Amazon SQS event source scaling optimally?
"When I use an Amazon Simple Queue Service (Amazon SQS) queue as an event source, I want my AWS Lambda function to have optimal concurrency."
"When I use an Amazon Simple Queue Service (Amazon SQS) queue as an event source, I want my AWS Lambda function to have optimal concurrency.ResolutionNote: When you configure an Amazon SQS queue as an event source, Lambda functions can optimally scale up to 60 more instances per minute. The maximum number of concurrent invocations is 1,000. If you use FIFO event source mapping, then functions can scale in concurrency to the number of active message groups. For more information see, Scaling and processing.Identify and resolve any Lambda function invocation errorsTo prevent errors at scale, Lambda throttles function scaling when invocation errors occur. When the errors are resolved, Lambda continues to scale the function. For more information, see Backoff strategy for failed invocations.For best practices on how to resolve Lambda function invocation errors, see Troubleshooting issues in Lambda and How do I troubleshoot Lambda function failures?Configure your Lambda function with optimal concurrency for your use caseReserved concurrencyIf you configured reserved concurrency on your function, then Lambda throttles your function when it reaches the reserved value. Make sure that the amount of concurrency that's reserved for your function has at least the following values:For standard Amazon SQS queues: 1,000For FIFO queues: The number of active message groupsUnreserved concurrencyIf you don't configure reserved concurrency on your function, then your function has a default unreserved concurrency quota of 1,000. This default quota applies to other functions in the same AWS account and AWS Region. If you have at least 1,000 unreserved concurrency available in your function's Region, then the function scales until it reaches the maximum available concurrency. When all of your account's concurrency is in use, Lambda throttles invocations.Note: Lambda functions initially scale as per burst capacity.If your expected traffic arrives quicker than the default burst capacity, then you can use Provisioned Concurrency to make sure that your function is available. With Provisioned Concurrency, the function continues to scale to a predefined value. After the provisioned concurrency is fully used, the function uses the unreserved concurrency pool of the account.Important: To scale up additional concurrent invocations, your account must not be near the service quota for scaling or burst concurrency in the Region. If you need a higher concurrency for a Region, then request a service quota increase in the Service Quotas console.Confirm that there are enough messages in your Amazon SQS queue to allow your Lambda function to scaleIf an Amazon SQS queue is configured to invoke a Lambda function, then Lambda will scale invocations only if there are messages in the queue.To check how many messages in your Amazon SQS queue still need to be processed, review your ApproximateNumberOfMessagesVisible metric.If the metric is low or at 0, then your function can't scale.If the metric is high and there are no invocation errors, then try increasing the batch size on your event notification. Increase the batch size until the duration metric increases faster than the batch size metric. For more information, see Monitoring functions on the Lambda console.Note: The maximum batch size for a standard Amazon SQS queue is 10,000 records. For FIFO queues, the maximum batch size is 10 records. 
For more information, see ReceiveMessage in the Amazon SQS API Reference.

Related information

Using Lambda with Amazon SQS
Managing AWS Lambda function concurrency
Configuring maximum concurrency for Amazon SQS event sources
https://repost.aws/knowledge-center/lambda-sqs-scaling
How do I install and activate the latest ENA driver for enhanced network support on an Amazon EC2 instance running Red Hat 6/7?
How do I install and activate the latest Elastic Network Adapter (ENA) driver for enhanced network support on an Amazon Elastic Compute Cloud (Amazon EC2) instance running RHEL 6 or 7?
"How do I install and activate the latest Elastic Network Adapter (ENA) driver for enhanced network support on an Amazon Elastic Compute Cloud (Amazon EC2) instance running RHEL 6 or 7?Short descriptionSome earlier versions of the Red Hat Enterprise Linux operating system don't include an ENA driver. For Nitro instances, the ENA driver is required to change your EC2 instance type for network connectivity.Note: It's a best practice to create a snapshot of your instance before proceeding with the following resolution.ResolutionRHEL 7.4 and laterRHEL 7.4 and later AMIs come preinstalled with the module needed for enhanced networking with ENA. For more information, see Enable enhanced networking with the Elastic Network Adapter (ENA) on Linux instances.RHEL 7 lower than 7.41.    Run the following command to upgrade the kernel to the latest version:sudo yum upgrade kernel -y2.    Stop the instance.Note: Data in instance store volumes is lost when an instance is stopped. For more information, see Determine the root device type of your instance. Be sure that you back up any data that you want to keep on an instance store volume.3.    Run the following AWS Command Line Interface (AWS CLI) command:aws ec2 modify-instance-attribute --instance-id i-xxxxxxxxxxxxxxxxx --ena-support --region xx-xxxxx-xNote: If the AWS CLI isn't installed on your instance, you can install and configure it. If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.4.    Start the instance.5.    Validate that the ENA driver loaded on the instance using the following command. Replace eth0 with the name of the interface that you want to check. The default name for a single interface is eth0. If your operating system uses predictable network names, the network name might be different.$ ethtool -i eth0RHEL 6Note: RHEL 6 doesn't come with Amazon EC2 production-ready NVMe drivers and you can't upgrade to NVME drivers separately. If you want to use a Nitro-based, or any instance type with NVMe instance store volumes, upgrade to RHEL 7.4 or higher.Download and install the ENA driver1.    Update the kernel and reboot the system so that the latest kernel takes effect:sudo yum upgrade kernel -y && sudo reboot2.    Install the development package for building kernel modules to match the kernel:sudo yum install kernel-devel-$(uname -r) gcc git patch rpm-build wget -ycd /usr/src/sudo wget https://github.com/amzn/amzn-drivers/archive/master.zipsudo unzip master.zipcd amzn-drivers-master/kernel/linux/enasudo make3.    Copy the module to the modules directory:sudo cp ena.ko /lib/modules/$(uname -r)/4.    Regenerate the kernel module dependency map files:sudo depmod5.    Use the modinfo command to confirm that the ENA module is present:modinfo enaThe modinfo command output shows the ENA driver information.Note: The ENA driver version might be newer than 2.2.11g while you compile and install it on your system.filename: /lib/modules/2.6.32-754.33.1.el6.x86_64/ena.koversion: 2.2.11glicense: GPLdescription: Elastic Network Adapter (ENA)author: Amazon.com, Inc. 
or its affiliatesretpoline: Ysrcversion: 17C7CD1CEAD3F0ADB3A5E5Ealias: pci:v00001D0Fd0000EC21sv*sd*bc*sc*i*alias: pci:v00001D0Fd0000EC20sv*sd*bc*sc*i*alias: pci:v00001D0Fd00001EC2sv*sd*bc*sc*i*alias: pci:v00001D0Fd00000EC2sv*sd*bc*sc*i*alias: pci:v00001D0Fd00000051sv*sd*bc*sc*i*depends: vermagic: 2.6.32-754.33.1.el6.x86_64 SMP mod_unload modversions parm: debug:Debug level (0=none,...,16=all) (int)parm: rx_queue_size:Rx queue size. The size should be a power of 2. Max value is 8K (int)parm: force_large_llq_header:Increases maximum supported header size in LLQ mode to 224 bytes, while reducing the maximum TX queue size by half. (int)parm: num_io_queues:Sets number of RX/TX queues to allocate to device. The maximum value depends on the device and number of online CPUs. (int)6.    Append net.ifnames=0 to /boot/grub/grub.conf to turn off network interface naming:sudo sed -i '/kernel/s/$/ net.ifnames=0/' /boot/grub/grub.conf7.    Stop the instance.8.    Activate enhanced network support at the instance level. The following example modifies the instance's attribute from the AWS Command Line Interface (AWS CLI).aws ec2 modify-instance-attribute --instance-id i-xxxxxxxxxxxxxxxxx --ena-support --region xx-xxxxx-x9.    Change the instance type to one of the ENA supported instance types.10.    Start the instance, connect to the instance using SSH, and then run the ethtool command:ethtool -i eth0The output includes the ENA driver version, as shown in the following example:driver: enaversion: 2.2.11gfirmware-version: bus-info: 0000:00:05.0supports-statistics: yessupports-test: nosupports-eeprom-access: nosupports-register-dump: nosupports-priv-flags: noConfigure the Dynamic Kernel Module Support (DKMS) program to make sure that the driver is included during future kernel upgradesKeep the following in mind:Software from the EPEL repository is not supported by Red Hat or AWS.Using DKMS voids the support agreement for your subscription.1.    Install the following Red Hat Package Manager ( rpm) file:sudo yum install https://archives.fedoraproject.org/pub/archive/epel/6/x86_64/epel-release-6-8.noarch.rpm -yNote: For a list of the most recent .rpm packages, refer to the EPEL - Fedora Project Wiki website.2.    Run the install command:sudo yum install dkms -y3.    Detect the current version:VER=$( grep ^VERSION /usr/src/amzn-drivers-master/kernel/linux/rpm/Makefile | cut -d' ' -f2 )4.    Copy the source files into the source directory:sudo cp -a /usr/src/amzn-drivers-master /usr/src/amzn-drivers-${VER}5.    Generate the DKMS config file and build and install the ENA module:sudo cat <<EOM | sudo tee /usr/src/amzn-drivers-${VER}/dkms.confPACKAGE_NAME="ena"PACKAGE_VERSION="$VER"CLEAN="make -C kernel/linux/ena clean"MAKE="make -C kernel/linux/ena/ BUILD_KERNEL=\${kernelver}"BUILT_MODULE_NAME[0]="ena"BUILT_MODULE_LOCATION="kernel/linux/ena"DEST_MODULE_LOCATION[0]="/updates"DEST_MODULE_NAME[0]="ena"AUTOINSTALL="yes"EOMsudo dkms add -m amzn-drivers -v $VERsudo dkms build -m amzn-drivers -v $VERsudo dkms install -m amzn-drivers -v $VERFollow"
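As a final check, a sketch like the following confirms from the AWS CLI that ENA support is turned on at the instance level (the instance ID and Region are placeholders):

aws ec2 describe-instances \
  --instance-ids i-xxxxxxxxxxxxxxxxx \
  --region xx-xxxxx-x \
  --query "Reservations[].Instances[].EnaSupport"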
https://repost.aws/knowledge-center/install-ena-driver-rhel-ec2
Why can't I delete my requester-managed VPC endpoint?
Why can't I delete my requester-managed Amazon Virtual Private Cloud (Amazon VPC) endpoint?
"Why can't I delete my requester-managed Amazon Virtual Private Cloud (Amazon VPC) endpoint?Short descriptionWhen deleting an interface VPC endpoint, you might receive the following error:vpce-0399e6e9fd2f4e430: Operation is not allowed for requester-managed VPC endpoints for the service com.amazonaws.vpce.region.vpce-svc-04c257ad126576358This error occurs when the endpoint being deleted is a requester-managed VPC endpoint. Requester-managed endpoints are created by any of the AWS-managed services (for example, Amazon Aurora Serverless). To delete this type of endpoint, you must determine the AWS-managed service that created the endpoint. After identifying the service, you must first delete that resource before you can delete the endpoint.ResolutionTo verify which AWS-managed service created an endpoint, do the following:If the endpoint was created within 90 daysIf the endpoint was created within 90 days of when you are trying to delete it, use AWS CloudTrail to determine which service created it. Make sure to set the CloudTrail console view to the last 90 days of recorded API activity (management events).To view CloudTrail events, do the following:1.    Open the CloudTrail console.2.    In the navigation pane, choose Event history.3.    From dropdown list select the Resource name, and then add the VPC endpoint ID (for example vpce-xxxxxx) in the filter.4.    Look for the CreateVpcEndpoint API call and check the username. For endpoints created by Aurora Serverless the username displays as RDSAuroraServeless. For endpoints created by Amazon Relational Database Service (Amazon RDS) Proxy, the username displays as RDSSlrAssumptionSession. To identify the endpoints created by AWS Network Firewall, view the event record for the CreateVpcEndpoint API call and check for tags with the key value of Firewall and AWSNetworkFirewallManaged:"Tag": [ { "Value": ""arn:aws:network-firewall:<region>:<account number>:firewall/<firewall name>", "tag": 1, "Key": "Firewall" }, { "Value": true, "tag": 2, "Key": "AWSNetworkFirewallManaged" }If the endpoint is older than 90 daysTo determine if AWS Network Firewall created the endpoint:1.    Open the VPC console, and then select Endpoints.2.    Select the endpoint and then select Tags.3.    Check for the following:The Key is AWSNetworkFirewallManaged and the Value is True.The Key is Firewall and the Value is your Network Firewall ARN arn:aws:network-firewall:region:account number:firewall/firewall name.You can also view endpoints created by AWS Network Firewall by doing the following:1.    Open the VPC console, and then select Firewalls.2.    Select Firewall details.To determine if Aurora Serverless created the endpoint:If the requestor-managed interface endpoint is created by Aurora Serverless after 90 days, perform a name lookup for the existing Aurora Serverless databases' endpoint. This returns the CNAME as the VPC interface endpoint DNS name. You can use this to confirm if the endpoint was created by Aurora Serverless.For example, you have an interface VPC endpoint with the ID vpce-0013b47d434ae7786 that you can't delete. To verify whether Aurora Serverless created the endpoint, do the following:1.    Perform a name lookup on the Aurora Serverless endpoint:dig test1.proxy-chnis5vssnuj.us-east-1.rds.amazonaws.com +shortvpce-0ce9fdcdd4aa4097e-1hbywnw6.vpce-svc-0b2f119acb23c050e.us-east-1.vpce.amazonaws.com.172.31.4.218172.31.21.822.    Check the CNAME value of the record matching the DNS name of the endpoint that you're trying to delete. 
This confirms that this endpoint was created by Aurora Serverless.Note: To verify the DNS name of the endpoint, do the following:1.    Open the VPC console and then select Endpoints.2.    Select the Details tab and view the listed DNS names.To determine if RDS Proxy created the endpoint:Complete the preceding steps provided for Aurora Serverless. If there are multiple RDS Proxy and Aurora Serverless endpoints, repeat the steps for each endpoint.To determine if it is Redshift-managed VPC endpoint:1.    Open the Amazon Redshift console, and then choose Configurations.2.    Check if there are any endpoints configured under Redshift-managed VPC endpoints.Delete the serviceAfter identifying the service that created the endpoint, delete the service (and the corresponding endpoint).Delete the network firewall to delete the endpoint created by network firewall.Delete the Aurora Serverless DB cluster.Delete the RDS Proxy to delete the endpoint created by RDS Proxy.Delete the Redshift-managed VPC endpoints using the Redshift console or use the delete-endpoint-access AWS Command Line Interface (AWS CLI) command.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Follow"
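If you prefer the AWS CLI over the console for the tag check described earlier, a sketch like the following lists an endpoint's tags so that you can look for the Firewall and AWSNetworkFirewallManaged keys (the endpoint ID is a placeholder):

aws ec2 describe-vpc-endpoints \
  --vpc-endpoint-ids vpce-0013b47d434ae7786 \
  --query "VpcEndpoints[].Tags"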
https://repost.aws/knowledge-center/vpc-delete-requester-managed-endpoint
I can't sign in because my credentials don't work
"I tried to sign in, but the credentials I used didn't work."
"I tried to sign in, but the credentials I used didn't work.ResolutionThe following scenarios often cause problems with account credentials:You forgot the credentials for the account.You accidentally use the incorrect credentials to sign in. For example, you use the credentials for the wrong account or AWS Identity and Access Management (IAM) identity.An authorized user changed the credentials. For example, your account administrator updated your IAM credentials.Recovering a forgotten passwordTo recover a password, you must know the email address or account number that's associated with the account. You must also have access to the email address to receive an email with instructions on how to reset the password.For instructions, see How do I recover a lost or forgotten AWS password?Finding the email address for an accountCheck any email addresses you might have used to open an AWS account. Most account-related correspondence from AWS comes from no-reply-aws@amazon.com. If you find correspondence like this, then the email address is probably associated with an AWS account.Ask other members of your team, organization, or family. If someone you know created the account, then they can help you get access.Recovering IAM credentialsContact your account administrator. Your account administrator sets the credentials for each IAM entity on the account.Some IAM identities can update their own passwords. For more information, see How an IAM user changes their own password.Note: AWS Support can't discuss the details of any AWS account other than the account that you're signed in to. AWS Support can't change the credentials associated with an account for any reason.For more information, see Troubleshooting sign-in issues.Receiving account supportIf the previous methods don't work and you still can't access your account, then contact AWS Support with the Amazon Web Services Support form.Related informationWhat do I do if I receive an error when entering the CAPTCHA to sign in to my AWS account?AWS security credentialsManaging user passwords in AWSHow do I remove a lost or broken MFA device from my AWS account?Follow"
https://repost.aws/knowledge-center/forgot-aws-sign-in-credentials
How do I access resources in another AWS account using AWS IAM?
I want to assume an AWS Identity and Access Management (IAM) role in another AWS account. How do I set up cross-account access using IAM?
"I want to assume an AWS Identity and Access Management (IAM) role in another AWS account. How do I set up cross-account access using IAM?Short descriptionYou can set up a trust relationship with an IAM role in another AWS account to access their resources. For example, you want to access the destination account from the source account. To do this, assume the IAM role from the source to destination account by providing your IAM user permission for the AssumeRole API. You must specify your IAM user in the trust relationship of the destination IAM role.Note: You can also assume a role from source IAM role to destination IAM role, instead of using user to role with role chaining. Role chaining works only for programmatic access such as the AWS Command Line Interface (AWS CLI) or API. Role changing can't be used with the AWS Management Console.ResolutionFollow these instructions to create an IAM permission policy for the source account, attach the policy to a user, and then create a role for the destination account.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.Source account1.    Create an IAM policy similar to the following:Note: Replace DESTINATION-ACCOUNT-ID and DESTINATION-ROLENAME with your own values.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "sts:AssumeRole" ], "Resource": [ "arn:aws:iam::DESTINATION-ACCOUNT-ID:role/DESTINATION-ROLENAME" ] } ]}2.    Attach the IAM policy to your IAM user permissions.Attach the created policy to your IAM user permissions by following the steps here.Destination account1.    Create an IAM role.2.    Paste the custom trust policy similar to the following:Note: Replace SOURCE-ACCOUNT-ID and SOURCE-USERNAME with your own values.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::SOURCE-ACCOUNT-ID:user/SOURCE-USERNAME" }, "Action": "sts:AssumeRole" } ]}Note: If you don’t have access to create and edit IAM roles and users, then get assistance from the account's owner to complete the process. As a best practice, grant access to your account and resources only to the entities that you trust.You can modify this policy to allow the assumption of as many source entities to as many destination roles as needed. For example, you can change the Principal value of the destination account trust policy to "AWS": "SOURCE-ACCOUNT-ID". This allows all entities in the source account with the assume role permissions to assume the destination account role. For more information, see Specifying a principal and Creating or editing the policy.Test your accessTo test your access, follow the instructions for Switching to a role (console).-or-Follow the instructions for Switching to an IAM role (AWS CLI).For more information, see IAM tutorial: Delegate access across AWS accounts using IAM roles.Related informationHow do I assume an IAM role using the AWS CLI?I created or updated an IAM policy and received the error "Has prohibited field Principal". How can I resolve this?How can I provide cross-account access to objects that are in Amazon S3 buckets?Why did I receive an "AccessDenied" or "Invalid information" error trying to assume a cross-account IAM role?Follow"
https://repost.aws/knowledge-center/cross-account-access-iam
How do I perform native backups of an Amazon RDS DB instance that's running SQL Server?
"I want to perform a native backup of my user database in my Amazon Relational Database Service (Amazon RDS) DB instance that's running SQL Server. I need to store the backup file in Amazon Simple Storage Service (Amazon S3), or use the database backup file to restore to the same or a different Amazon RDS for SQL Server DB instance."
"I want to perform a native backup of my user database in my Amazon Relational Database Service (Amazon RDS) DB instance that's running SQL Server. I need to store the backup file in Amazon Simple Storage Service (Amazon S3), or use the database backup file to restore to the same or a different Amazon RDS for SQL Server DB instance.Short descriptionAmazon RDS supports native backup and restore for Microsoft SQL Server databases. You can create a full backup of your on-premises database and store the file in Amazon S3. You can then restore the backup file to an existing Amazon RDS DB instance that's running SQL Server. You can also restore this backup file to an on-premises server or to a different Amazon RDS DB instance that's running SQL Server.ResolutionTo set up a native backup of the SQL Server database, you need the following components:An Amazon S3 bucket to store your backup files.Note: Create the S3 bucket in the same Region as your RDS DB instance.An AWS Identity and Access Management (IAM) role to access the bucketThe SQLSERVER_BACKUP_RESTORE option added to an option group on the DB instanceOpen the Amazon RDS console, and then choose Option Groups in the navigation pane. Choose Create Group, and enter the name, description, engine, and engine version of your server. Then, choose Create.Select the option group that you created, and then choose Add Option. Choose SQLSERVER_BACKUP_RESTORE. It's a best practice to create a new IAM role and then choose Add Option, so that your IAM role has the required privileges. Choose your S3 bucket, or create a new S3 bucket. Then, choose Apply Immediately and Add Option.Associate the option group with the DB instance by choosing Databases in the navigation pane, and then choose the instance to back up. Choose Actions, and then choose Modify.Under Database Options, choose the option group that you created, and then choose Apply Immediately and Continue. Review the information, and then choose Modify DB Instance. This option group modification has no downtime because instance reboot is not required.When the status changes from modifying to available, connect to the DB instance through SQL Server Management Studio using the master user of your RDS instance. Then, choose New Query and enter one of the following SQL statements to initiate the backup of the desired database:Initiate backup for unencrypted databasesexec msdb.dbo.rds_backup_database @source_db_name='database_name', @s3_arn_to_backup_to='arn:aws:s3:::bucket_name/file_name_and_extension', @overwrite_S3_backup_file=1;Initiate backup for encrypted databasesexec msdb.dbo.rds_backup_database @source_db_name='database_name', @s3_arn_to_backup_to='arn:aws:s3:::bucket_name/file_name_and_extension', @kms_master_key_arn='arn:aws:kms:region:account-id:key/key-id', @overwrite_S3_backup_file=1;Note: Replace database_name, bucket_name, file_name_and_extension, region, account-id, and key-id listed in these examples to match your scenario. You can use the backup file, generated in the S3 bucket, to restore the user database to a new RDS DB instance. When the rds_backup_database or rds_restore_database stored procedure is called, the task starts and outputs the information about the task.When the lifecycle status of the task is SUCCESS, the task is complete. You can then open the Amazon S3 console, choose the bucket in which you created the user database backup, and view the backup file. 
You can download this file, or use the user database backup file to restore to the same Amazon RDS for SQL Server DB instance or to a new RDS DB instance.

Use one of the following SQL statements to restore from the backup file available in the S3 bucket:

Restore unencrypted databases

exec msdb.dbo.rds_restore_database @restore_db_name='database_name', @s3_arn_to_restore_from='arn:aws:s3:::bucket_name/file_name_and_extension';

Restore encrypted databases

exec msdb.dbo.rds_restore_database @restore_db_name='database_name', @s3_arn_to_restore_from='arn:aws:s3:::bucket_name/file_name_and_extension', @kms_master_key_arn='arn:aws:kms:region:account-id:key/key-id';

You can get the task ID after you run the backup or restore statement. Or, you can use the following script to identify all the completed and pending tasks for a particular database:

exec msdb.dbo.rds_task_status @db_name='database_name'

To track the status of a job, use this SQL statement:

exec msdb..rds_task_status @task_id=5

For a list of potential errors and solutions, see Migrating Microsoft SQL Server Enterprise workloads to Amazon RDS.

Related information

Working with backups
Backing up and restoring Amazon RDS DB instances
Importing and exporting SQL Server databases using native backup and restore
https://repost.aws/knowledge-center/native-backup-rds-sql-server
How do I include my dedicated IP address as part of my default IP pool on Amazon SES?
I have dedicated IP addresses on Amazon Simple Email Service (Amazon SES). How can I send my emails using a dedicated IP address from my default IP address pool?
"I have dedicated IP addresses on Amazon Simple Email Service (Amazon SES). How can I send my emails using a dedicated IP address from my default IP address pool?ResolutionIf you haven't assigned a dedicated IP address to an IP pool, then the dedicated IP address is already included in the default IP pool (named ses-default-dedicated-pool). Additionally, if you don't specify an IP pool in your configuration set, then Amazon SES uses one of the dedicated IP addresses from ses-default-dedicated-pool.To modify an existing configuration set to explicitly specify the use of ses-default-dedicated-pool, follow these steps:Open the Amazon SES console.From the navigation pane, under Email Sending, choose Configuration Sets.Choose the configuration set that you want to associate with ses-default-dedicated-pool.Choose the Sending IP Pool tab. Then, for Pool name, select ses-default-dedicated-pool.Choose Finish.Note the following important considerations for using a configuration set and dedicated IP addresses:For a configuration set to apply to emails, you must pass the name of the configuration set in the email headers. For more information, see Specifying a Configuration Set When You Send Email.It's a best practice to attach the default IP pool to the configuration set only after the dedicated IP addresses are warmed up. Amazon SES automatically warms up your dedicated IP addresses by gradually increasing the number of emails sent from the IP addresses. Or, you can also choose to disable the automatic warm-up process so that you can manually warm up your IP addresses. While the dedicated IP addresses are warming up, avoid sending emails explicitly using the default IP pool associated with a configuration set. This impacts the warm-up process and can result in ISPs throttling emails coming from those IP addresses.Related InformationAssigning an IP Pool to an Existing Configuration SetFollow"
https://repost.aws/knowledge-center/ses-dedicated-ip-default-ip-pool
How do I add Python packages with compiled binaries to my deployment package and make the package compatible with Lambda?
"I used pip to install a Python package that contains compiled code, and now my AWS Lambda function returns an "Unable to import module" error. Why is this happening, and how do I resolve the issue?"
"I used pip to install a Python package that contains compiled code, and now my AWS Lambda function returns an "Unable to import module" error. Why is this happening, and how do I resolve the issue?Short descriptionPython packages that contain compiled code (for example: NumPy and pandas) aren't always compatible with Lambda runtimes by default. If you install these packages using pip, then the packages download and compile a module-name package for the architecture of the local machine. This makes your deployment package incompatible with Lambda if you're not using a Linux operating system.To create a Lambda deployment package or layer that's compatible with Lambda Python runtimes when using pip outside of Linux operating system, run the pip install command with manylinux2014 as the value for the --platform parameter.Note: macOS --platform tags don't work. For example: The win_amd64 and macosx_10_6_intel tags won't install a deployment package that's compatible with Lambda.ResolutionNote: This example procedure shows how to install pandas for the Lambda Python 3.9 runtime that's running on x86_64 architecture.1.    Open a command prompt. Then, confirm that you're using a version of pip that's version 19.3.0 or newer by running the following pip command:pip --versionIf you're using a version of pip that's older than pip version 19.3.0, then upgrade to the latest version of pip by running the following command:python3.9 -m pip install --upgrade pip2.    Install the precompiled Python package's .whl file as a dependency in your Lambda function's project directory by running the following command:Important: Replace my-lambda-function with the name of your function's project directory.pip install \ --platform manylinux2014_x86_64 \ --target=my-lambda-function \ --implementation cp \ --python 3.9 \ --only-binary=:all: --upgrade \ pandas3.    Open your Lambda function's project directory. If you're using macOS, then run the following command:cd my-lambda-function4.    In a text editor, create a new file named lambda_function.py. Then, copy and paste the following example code into the file and save it in your Lambda function's project directory:import numpy as npimport pandas as pddef lambda_handler(event, context): df2 = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]),columns=["a", "b", "c"]) number = np.pi print(df2) print(number)5.    Create a Lambda deployment package .zip file archive that includes all of the installed libraries and source code by running the following command:zip -r ../my-deployment-package.zip .6.    Use the my-deployment-package.zip file archive to either create a new Python 3.9 Lambda function or to update an existing one. For instructions, see Deploy your .zip file to the function in the AWS Lambda Developer Guide.Note: You can use a similar procedure to create a Lambda layer that can be used across multiple functions. For example, the following command creates a new Lambda layer to install pandas for the Lambda Python 3.9 runtime, running on arm64 architecture:pip install \ --platform manylinux2014_aarch64 \ --target=./python/lib/python3.9/site-packages \ --implementation cp \ --python 3.9 \ --only-binary=:all: --upgrade \ pandasRelated informationDeploy Python Lambda functions with .zip file archivesCreating and sharing Lambda layersFollow"
https://repost.aws/knowledge-center/lambda-python-package-compatible
How do I monitor my Amazon OpenSearch Service cluster using CloudWatch alarms?
I want to monitor my Amazon OpenSearch Service cluster for stability issues. How can I effectively monitor my cluster?
"I want to monitor my Amazon OpenSearch Service cluster for stability issues. How can I effectively monitor my cluster?ResolutionImportant: Different versions of Elasticsearch use different thread pools to process calls to the _index API.Elasticsearch versions 1.5 and 2.3 use the index thread pool.Elasticsearch versions 5.x, 6.0, and 6.2 use the bulk thread pool. (Currently, the OpenSearch Service console doesn't include a graph for the bulk thread pool.)Elasticsearch versions 6.3 and later use the write thread pool.To monitor the health of your OpenSearch Service cluster, set the recommended Amazon CloudWatch alarms and the following OpenSearch Service cluster metric alarms:MasterReachableFromNodeKibanaHealthyNodesDiskQueueDepthThreadpoolIndexQueueThreadpoolSearchQueueYou can configure your OpenSearch Service metric alarms like this:MasterReachableFromNode:Statistic = MaximumValue = ‘=0’Frequency = 1 periodPeriod = 1 minuteIssue: Leader node is down.KibanaHealthyNodes:Statistic = AverageValue = ‘=0’Frequency = 1 periodPeriod = 1 minuteIssue: Indicates that the kibana index is unhealthy.DiskQueueDepth:Statistic = AverageValue = ‘>=100'Frequency = 1 periodPeriod = 5 minutesIssue: Disk Queue Depth is the number of I/O requests that are queued at a time against the storage. This could indicate a surge in requests or Amazon EBS throttling, resulting in increased latency.ThreadpoolIndexQueue and ThreadpoolSearchQueue:Statistic = MaximumValue = ‘>=20’Frequency = 1 periodPeriod = 1 minuteIssue: Indicates that there are requests getting queued up, which can be rejected. To verify the request status, check the CPU Utilization and Threadpool Index or Search rejects.To set up an Amazon CloudWatch alarm for your OpenSearch Service cluster, perform the following steps:1.    Open the Amazon CloudWatch console.2.    Go to the Alarm tab.3.    Choose Create Alarm.4.    Choose Select Metric.5.    Choose ES for your metric.6.    Select Per-Domain and Per-Client Metrics.7.    Select a metric and choose Next.8.    Configure the following settings for your Amazon CloudWatch alarm:Statistic = MaximumPeriod to 1 minuteThreshold type = StaticAlarm condition = Greater than or equal toThreshold value = 19.    Choose the Additional configuration tab.10.    Update the following configuration settings:Datapoints to alarm = Frequency stated aboveMissing data treatment = Treat missing data as ignore (maintain the alarm state)11.    Choose Next.12.    Choose the action that you want your alarm to take, and choose Next.13.    Set a name for your alarm, and then choose Next.14.    Choose Create Alarm.Note: If the alarm is triggered for CPUUtilization or JVMMemoryPressure, check your Amazon CloudWatch metrics to see if there's a spike coinciding with incoming requests. In particular, monitor these Amazon CloudWatch metrics: IndexingRate, SearchRate, and OpenSearchRequests.Related informationClusterBlockExceptionUsing Amazon CloudWatch alarmsFollow"
https://repost.aws/knowledge-center/opensearch-cloudwatch-alarms
Why did I receive a "No space left on device" or "DiskFull" error on Amazon RDS for PostgreSQL?
"I have a small Amazon Relational Database Service (Amazon RDS) for PostgreSQL database. The instance's free storage space is decreasing, and I receive the following error:"Error message: PG::DiskFull: ERROR: could not extend file "base/16394/5139755": No space left on device. HINT: Check free disk space."I want to resolve the DiskFull errors and prevent storage issues."
"I have a small Amazon Relational Database Service (Amazon RDS) for PostgreSQL database. The instance's free storage space is decreasing, and I receive the following error:"Error message: PG::DiskFull: ERROR: could not extend file "base/16394/5139755": No space left on device. HINT: Check free disk space."I want to resolve the DiskFull errors and prevent storage issues.Short descriptionAmazon RDS DB instance storage is used by the following:Temporary tables or files that are created by PostgreSQL transactionsData filesWrite ahead logs (WAL logs)Replication slotsDB logs (error files) that are retained for too longOther DB or Linux files that support the consistent state of the RDS DB instanceResolution1.    Use Amazon CloudWatch to monitor your DB storage space using the FreeStorageSpace metric. When you set a CloudWatch alarm for free storage space, you receive a notification when the space starts to decrease. If you receive an alarm, review the causes of storage issues mentioned previously.2.    If your DB instance is still consuming more storage than expected, check for the following:Size of the DB log filesPresence of temporary filesConstant increase in transaction logs disk usageReplication slot:Physical replication slots are created by cross-Region read replicas or same-Region read replicas only if they are running on PostgreSQL 14.1 and higher versionsLogical replication slots are created for a replica or subscriberBloat or improper removal of dead rowsPresence of orphaned files3.    When your workload is predictable, enable storage autoscaling for your instance. With storage autoscaling enabled, when Amazon RDS detects that you are running out of free database space, your storage is automatically scaled. Amazon RDS starts a storage modification for an autoscaling-enabled DB instance when the following factors apply:Free available space is less than 10 percent of the allocated storage.The low-storage condition lasts at least five minutes.At least six hours have passed since the last storage modification, or storage optimization has completed on the instance, whichever is longer.You can set a limit for autoscaling your DB instance by setting the maximum storage threshold. For more information, see Managing capacity automatically with Amazon RDS storage autoscaling.Check the size of the DB log filesBy default, Amazon RDS for PostgreSQL error log files have a retention value of 4,320 minutes (three days). Large log files can use more space because of higher workloads or excessive logging. You can change the retention period for system logs using the rds.log_retention_period parameter in the DB parameter group associated with your DB instance. For example, if you set the value to 1440, then logs are retained for one day. For more information, see PostgreSQL database log files.Also, you can change error reporting and logging parameters in the DB parameter group to reduce excessive logging. This in turn reduces the log file size. For more information, see Error reporting and logging.Check for temporary filesTemporary files are files that are stored per backend or session connection. These files are used as a resource pool. Review temporary files statistics by running a command similar to this:psql=> SELECT datname, temp_files AS "Temporary files",temp_bytes AS "Size of temporary files" FROM pg_stat_database ;Important: The columns temp_files and temp_bytes in view pg_stat_database are collecting statistics in aggregation (accumulative). 
This is by design because these counters are reset only by recovery at server start. That is, the counters are reset after an immediate shutdown, a server crash, or a point-in-time recovery (PITR). For this reason, it's a best practice to monitor the growth of these files in number and size, rather than reviewing only the output.Temporary files are created for sorts, hashes, or temporary query results. To track the creation of temporary tables or files, set log_temp_files in a custom parameter group. This parameter controls the logging of temporary file names and sizes. If you set the log_temp_files value to 0, then all temporary file information is logged. If you set the parameter to a positive value, then only files that are equal to or larger than the specified number of kilobytes are logged. The default setting is -1, which disables the logging of temporary files.You can also use an EXPLAIN ANALYZE of your query to review disk sorting. When you review the log output, you can see the size of temporary files created by your query. For more information, see the PostgreSQL documentation for Monitoring database activity.Check for a constant increase in transaction logs disk usageThe CloudWatch metric for TransactionLogsDiskUsage represents the disk space used by transaction WALs. Increases in transaction log disk usage can happen because of:High DB loads (writes and updates that generate additional WALs)Streaming read replica lag (replicas in the same Region) or read replica in storage full stateReplication slotsReplication slots can be created as part of logical decoding feature of AWS Database Migration Service (AWS DMS). For logical replication, the slot parameter rds.logical_replication is set to 1. Replication slots retain the WAL files until the files are externally consumed by a consumer. For example, they might be consumed by pg_recvlogical; extract, transform, and load (ETL) jobs; or AWS DMS.If you set the rds.logical_replication parameter value to 1, then AWS RDS sets the wal_level, max_wal_senders, max_replication_slots, and max_connections parameters. Changing these parameters can increase WAL generation. It's a best practice to set the rds.logical_replication parameter only when you are using logical slots. If this parameter is set to 1 and logical replication slots are present but there isn't a consumer for the WAL files retained by the replication slot, then then transaction logs disk usage can increase. 
This also results in a constant decrease in free storage space.Run this query to confirm the presence and size of replication slots:PostgreSQL v9:psql=> SELECT slot_name, pg_size_pretty(pg_xlog_location_diff(pg_current_xlog_location(),restart_lsn)) AS replicationSlotLag, active FROM pg_replication_slots ;PostgreSQL v10 and later:psql=> SELECT slot_name, pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(),restart_lsn)) AS replicationSlotLag, active FROM pg_replication_slots ;After you identify the replication slot that isn't being consumed (with an active state that is False), drop the replication slot by running this query:psql=> SELECT pg_drop_replication_slot('Your_slotname_name');Note: If an AWS DMS task is the consumer and it is no longer required, then delete the task and manually drop the replication slot.Sample output:slot_name | replicationslotlag | active---------------------------------------------------------------+--------------------+--------xc36ujql35djp_00013322_907c1e0a_9f8b_4c13_89ea_ef0ea1cf143d | 129 GB | f7pajuy7htthd7sqn_00013322_a27bcebf_7d0f_4124_b336_92d0fb9f5130 | 704 MB | tzp2tkfo4ejw3dtlw_00013322_03e77862_689d_41c5_99ba_021c8a3f851a | 624 MB | tIn this example, the slot name xc36ujql35djp_00013322_907c1e0a_9f8b_4c13_89ea_ef0ea1cf143d has an active state that is False. So this slot isn't actively used, and the slot is contributing to 129 GB of transaction files.Drop the query by running the following command:psql=> SELECT pg_drop_replication_slot('xc36ujql35djp_00013322_907c1e0a_9f8b_4c13_89ea_ef0ea1cf143d');Check the status of cross-Region read replicasWhen you use cross-Region read replication, a physical replication slot is created on the primary instance. If the cross-Region read replica fails, then the storage space on the primary DB instance can be affected. This happens because the WAL files aren't replicated over to the read replica. You can use CloudWatch metrics, Oldest Replication Slot Lag, and Transaction Logs Disk Usage to determine how far behind the most lagging replica is. You can also see how much storage is used for WAL data.To check the status of cross-Region read replica, use query pg_replication_slots. For more information, see the PostgreSQL documentation for pg_replication_slots. If the active state is returned as false, then the slot is not currently used for replication.psql=> SELECT * FROM pg_replication_slots;You can also use view pg_stat_replication on the source instance to check the statistics for the replication. For more information, see the PostgreSQL documentation for pg_stat_replication.Check for bloat or improper removal of dead rows (tuples)In normal PostgreSQL operations, tuples that are deleted or made obsolete by an UPDATE aren't removed from their table. For Multi-Version Concurrency Control (MVCC) implementations, when a DELETE operation is performed the row isn't immediately removed from the data file. Instead, the row is marked as deleted by setting the xmax field in a header. Updates mark rows for deletion first, and then carry out an insert operation. This allows concurrency with minimal locking between the different transactions. As a result, different row versions are kept as part of MVCC process.If dead rows aren't cleaned up, they can stay in the data files but remain invisible to any transaction, which impacts disk space. 
If a table has many DELETE and UPDATE operations, then the dead tuples might use a large amount of disk space that's sometimes called "bloat" in PostgreSQL.The VACUUM operation can free the storage used by dead tuples so that it can be reused, but this doesn't release the free storage to the filesystem. Running VACUUM FULL releases the storage to the filesystem. Note, however, that during the time of the VACUUM FULL run an access exclusive lock is held on the table. This method also requires extra disk space because it writes a new copy of the table and doesn't release the old copy until the operation is complete. It's a best practice to use this method only when you must reclaim a significant amount of space from within the table. It's also a best practice to perform periodic vacuum or autovacuum operations on tables that are updated frequently. For more information, see the PostgreSQL documentation for VACUUM.To check for the estimated number of dead tuples, use the pg_stat_all_tables view. For more information, see the PostgreSQL documentation for pg_stat_all_tables view. In this example, there are 1999952 dead tuples (n_dead_tup):psql => SELECT * FROM pg_stat_all_tables WHERE relname='test';-[ RECORD 1 ]-------+------------------------------relid | 16395schemaname | publicrelname | testseq_scan | 3seq_tup_read | 5280041idx_scan | idx_tup_fetch | n_tup_ins | 2000000n_tup_upd | 0n_tup_del | 3639911n_tup_hot_upd | 0n_live_tup | 1635941n_dead_tup | 1999952n_mod_since_analyze | 3999952last_vacuum | last_autovacuum | 2018-08-16 04:49:52.399546+00last_analyze | 2018-08-09 09:44:56.208889+00last_autoanalyze | 2018-08-16 04:50:22.581935+00vacuum_count | 0autovacuum_count | 1analyze_count | 1autoanalyze_count | 1psql => VACUUM TEST;Check for orphaned filesOrphaned files can occur when the files are present in the database directory but there are no objects that point to those files. This might happen if your instance runs out of storage or the engine crashes during an operation such as ALTER TABLE, VACUUM FULL, or CLUSTER. To check for orphaned files, follow these steps:1.    Log in to PostgreSQL in each database.2.    Run these queries to assess the used and real sizes.# Size of the database occupied by filespsql=> SELECT pg_size_pretty(pg_database_size('DATABASE_NAME')); # Size of database retrieved by summing the objects (real size)psql=> SELECT pg_size_pretty(SUM(pg_relation_size(oid))) FROM pg_class;3.    Note the results. If the difference is significant, then orphaned files might be using storage space.Related informationWorking with read replicas for Amazon RDS for PostgreSQLAutomated monitoring toolsFollow"
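As a quick supplement to the dead tuple check above, the following psql one-liner lists the tables with the most dead tuples so that you can prioritize vacuum work. This is a minimal sketch that assumes you can connect with psql; the endpoint, user, and database names are placeholders.
psql -h <instance-endpoint> -U <master-user> -d <database-name> -c "SELECT relname, n_live_tup, n_dead_tup FROM pg_stat_user_tables ORDER BY n_dead_tup DESC LIMIT 10;"
Tables at the top of this list are usually the best candidates for a targeted VACUUM or for autovacuum tuning.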
https://repost.aws/knowledge-center/diskfull-error-rds-postgresql
How can I use AWS DMS to migrate data to Amazon S3 in Parquet format?
How can I use AWS Database Migration Service (AWS DMS) to migrate data in Apache Parquet (.parquet) format to Amazon Simple Storage Service (Amazon S3)?
"How can I use AWS Database Migration Service (AWS DMS) to migrate data in Apache Parquet (.parquet) format to Amazon Simple Storage Service (Amazon S3)?ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.You can use AWS DMS to migrate data to an S3 bucket in Apache Parquet format if you use replication 3.1.3 or a more recent version. The default Parquet version is Parquet 1.0.1.    Create a target Amazon SE endpoint from the AWS DMS Console, and then add an extra connection attribute (ECA), as follows. Also, check the other extra connection attributes that you can use for storing parquet objects in an S3 target.dataFormat=parquet;Or, create a target Amazon S3 endpoint using the create-endpoint command in the AWS Command Line Interface (AWS CLI):aws dms create-endpoint --endpoint-identifier s3-target-parque --engine-name s3 --endpoint-type target --s3-settings '{"ServiceAccessRoleArn": <IAM role ARN for S3 endpoint>, "BucketName": <S3 bucket name to migrate to>, "DataFormat": "parquet"}'2.    Use the following extra connection attribute to specify the Parquet version of output file:parquetVersion=PARQUET_2_0;3.    Run the describe-endpoints command to see if the S3 endpoint that you created has the S3 setting DataFormat or the extra connection attribute dataFormat set to "parquet". To check the S3 setting DataFormat, run a command similar to the following:aws dms describe-endpoints --filters Name=endpoint-arn,Values=<S3 target endpoint ARN> --query "Endpoints[].S3Settings.DataFormat"[ "parquet"]4.    If the value of the DataFormat parameter is CSV, then recreate the endpoint.5.    After you have the output in Parquet format, you can parse the output file by installing the Apache Parquet command line tool:pip install parquet-cli --user6.    Then, inspect the file format:parq LOAD00000001.parquet # Metadata <pyarrow._parquet.FileMetaData object at 0x10e948aa0> created_by: AWS num_columns: 2 num_rows: 2 num_row_groups: 1 format_version: 1.0 serialized_size: 1697.    Finally, print the file content:parq LOAD00000001.parquet --head i c0 1 insert11 2 insert2Related informationUsing Amazon S3 as a target for AWS Database Migration ServiceFollow"
https://repost.aws/knowledge-center/dms-s3-parquet-format
"How do I use the Fn::Sub function in AWS CloudFormation with Fn::FindInMap, Fn::ImportValue, or other supported functions?"
"I want to use the Fn::Sub intrinsic function in AWS CloudFormation with Fn::FindInMap, Fn::ImportValue, or other supported functions."
"I want to use the Fn::Sub intrinsic function in AWS CloudFormation with Fn::FindInMap, Fn::ImportValue, or other supported functions.Short descriptionYou can use the Fn::Sub intrinsic function to substitute supported functions or to substitute variables in an input string with values that you specify.To substitute the value from supported functions, you must use variable map with the name and value as shown below:JSON:{ "Fn::Sub" : [ String, { Var1Name: Var1Value, Var2Name: Var2Value } ] }YAML:!Sub - String - Var1Name: Var1Value Var2Name: Var2ValueResolutionUsage of Fn::Sub with Ref functionThe following example uses a mapping to substitute the Domain variable with the resulting value from the Ref function.JSON:{ "Parameters": { "RootDomainName": { "Type": "String", "Default": "example123.com" } }, "Resources": { "DNS": { "Type": "AWS::Route53::HostedZone", "Properties": { "Name": { "Fn::Sub": [ "www.${Domain}", { "Domain": { "Ref": "RootDomainName" } } ] } } } }}YAML:Parameters: RootDomainName: Type: String Default: example123.comResources: DNS: Type: 'AWS::Route53::HostedZone' Properties: Name: !Sub - 'www.${Domain}' - Domain: !Ref RootDomainNameUsage of Fn::Sub with Fn::FindInMap functionThe following example uses a mapping to substitute the log_group_name variable with the resulting value from the Fn::FindInMap function.JSON:{ "Mappings": { "LogGroupMapping": { "Test": { "Name": "test_log_group" }, "Prod": { "Name": "prod_log_group" } } }, "Resources": { "myLogGroup": { "Type": "AWS::Logs::LogGroup", "Properties": { "LogGroupName": { "Fn::Sub": [ "cloud_watch_${log_group_name}", { "log_group_name": { "Fn::FindInMap": [ "LogGroupMapping", "Test", "Name" ] } } ] } } } }}YAML:Mappings: LogGroupMapping: Test: Name: test_log_group Prod: Name: prod_log_groupResources: myLogGroup: Type: 'AWS::Logs::LogGroup' Properties: LogGroupName: !Sub - 'cloud_watch_${log_group_name}' - log_group_name: !FindInMap - LogGroupMapping - Test - NameUsage of Fn::Sub with Fn::ImportValue functionThe following example uses a mapping to substitute the Domain variable with the resulting value from the Fn::ImportValue function.Note: “DomainName” is the name of the Output exported by another CloudFormation stack.JSON:{ "Resources": { "DNS": { "Type": "AWS::Route53::HostedZone", "Properties": { "Name": { "Fn::Sub": [ "www.${Domain}", { "Domain": { "Fn::ImportValue": "DomainName" } } ] } } } }}YAML:Resources: DNS: Type: 'AWS::Route53::HostedZone' Properties: Name: !Sub - 'www.${Domain}' - Domain: !ImportValue DomainNameFollow"
https://repost.aws/knowledge-center/cloudformation-fn-sub-function
Why are properly functioning Amazon ECS tasks registered to ELB marked as unhealthy and replaced?
"Elastic Load Balancing (ELB) is repeatedly flagging properly functioning Amazon Elastic Container Service (Amazon ECS) tasks as unhealthy. These incorrectly flagged tasks are stopped, and then new tasks are started instead."
"Elastic Load Balancing (ELB) is repeatedly flagging properly functioning Amazon Elastic Container Service (Amazon ECS) tasks as unhealthy. These incorrectly flagged tasks are stopped, and then new tasks are started instead.Short descriptionSome Amazon ECS tasks have several dependencies and lengthy bootstrapping processes that can exceed the ELB health check grace period, even when functioning as intended. When Amazon ECS tasks don't respond to ELB health checks within the grace period, they're flagged as unhealthy. To increase the health check grace period for your service, complete the following steps.To troubleshoot ECS tasks failing an Application Load Balancer health check, see How can I get my Amazon ECS tasks running using the Amazon EC2 launch type to pass the Application Load Balancer health check in Amazon ECS?ResolutionIf no grace period is configured, then the service scheduler immediately replaces any targets marked as unhealthy. Change the grace period to allow more time for your Amazon ECS tasks to complete their processes and pass the health check.Note: To change the grace period, use the earlier version of the ECS console. To change to the earlier version of the console, toggle off New ECS Experience at the top of the navigation pane. Then, complete the following steps.Open the AWS Management Console.In the navigation bar, choose Services, and then select ECS from the list.Select your service from the Service Name list.Choose Update.Choose Next step.On the Step 2: Configure network page, change the Health check grace period to an appropriate time period for your service. The maximum time period is 2,147,483,647 seconds.Caution: To prevent delayed replacement of legitimately unhealthy Amazon ECS tasks, carefully estimate the required grace period for your lengthiest tasks. When setting your grace period, consider all relevant factors, such as bootstrap time and time to pull container images.Choose Next step, and then choose Update Service.You can also use these ways to increase the grace period:Use the HealthCheckGracePeriodSeconds parameter defined in the AWS::ECS::Service resource in AWS CloudFormation.Run the UpdateService command in the AWS Command Line Interface (AWS CLI), and increase the --health-check-grace-period-seconds value.Related informationAmazon ECS adds ELB health check grace periodFollow"
https://repost.aws/knowledge-center/elb-ecs-tasks-improperly-replaced
How do I manage the clock source for EC2 instances running Linux?
"How can I determine the clock source used by an Amazon Elastic Compute Cloud (Amazon EC2) instance running Linux, and how can I change it?"
"How can I determine the clock source used by an Amazon Elastic Compute Cloud (Amazon EC2) instance running Linux, and how can I change it?Short descriptionBy using an SSH client, you can find the current clock source, list the available clock sources, or change the clock source.Note: There are many clock sources available for Hardware Virtual Machine (HVM) instances, such as Xen, Time Stamp Counter (TSC), High Precision Event Time (HPET), or Advanced Configuration and Power Interface Specification (ACPI). For EC2 instances launched on the AWS Xen Hypervisor, it's a best practice to use the tsc clock source. Other EC2 instance types, such as C5 or M5, use the AWS Nitro Hypervisor. The recommended clock source for the AWS Nitro Hypervisor is kvm-clock.Note: AWS Graviton2 processors use arch_sys_counter as the clock source.ResolutionTo find the clock sourceOpen an SSH client into your EC2 instance, and then run the following commands to find the current and available clock sources.To find the currently set clock source, list the contents of the current_clocksource file:cat /sys/devices/system/clocksource/clocksource0/current_clocksourcexenTo list the available clock sources, list the contents of the available_clocksource file:cat /sys/devices/system/clocksource/clocksource0/available_clocksourcexen tsc hpet acpi_pmTo set the current clock source to a different value1.    Run bash as a superuser to override the current_clocksource:sudo bash -c 'echo tsc > /sys/devices/system/clocksource/clocksource0/current_clocksource'2.    Run the dmesg command to view the kernel messages:dmesg | lessIf the override was successful, this message appears:clocksource: Switched to clocksource tscNote: Rebooting the system causes the Linux kernel to reset the clock source.To permanently set the clock sourceTo permanently set the clock source, set the source in the system boot loader:1.    Set clocksource in the kernel command line parameter.For example, if you use grub2 and you want to set the clock source to "tsc", open /etc/default/grub in an editor. Then, add clocksource=tsc tsc=reliable for the GRUB_CMDLINE_LINUX option:GRUB_CMDLINE_LINUX="console=tty0 crashkernel=auto console=ttyS0,115200 clocksource=tsc tsc=reliable"2.    Generate the grub.cfg file:grub2-mkconfig -o /boot/grub2/grub.cfgRelated informationSet the time for your Linux instanceFollow"
https://repost.aws/knowledge-center/manage-ec2-linux-clock-source
How do I change domain ownership or other domain information in Route 53?
"I want to change the ownership of my domain, update contact information, or troubleshoot a domain name change operation in Amazon Route 53."
"I want to change the ownership of my domain, update contact information, or troubleshoot a domain name change operation in Amazon Route 53.ResolutionChange the domain ownerFor some domains, you can update the owner by changing the person or organization information listed for the domain. You can also update the contact information for the domain.For more information see Updating the contact information and ownership for a domain.Change ownership of a TLDWhen you change the owner of a domain, the registries for some TLDs require special processing. For these domains, submit a change of domain ownership form to AWS Support.For more information, see TLDs that require special processing to change the owner.Update ownership of multiple domains in a bulk updateTo update ownership for multiple domains to a different user, create a custom script listing all of the domain names that you want to update. Then, run the update-domain-contact AWS Command Line Interface (AWS CLI) command. Pass the script through the JSON file listed in the following example:aws route53domains update-domain-contact \ --region us-east-1 \ --cli-input-json file://C:\temp\update-domain-contact.jsonNote: If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.TroubleshootingThe domain ownership email is sent to an inactive or nonexistent contactIf the domain verification email is sent to a contact that's no longer active, then the ownership change becomes stuck in the Pending status. Or, if the email is incorrect, then the change is stuck in the Domain ownership change in progress status. To resolve these issues, complete the following steps:Cancel the CHANGE_DOMAIN_OWNER operation by contacting AWS Support.After AWS Support cancels the operation, update the email address of your domain to the correct email address.Restart the CHANGE_DOMAIN_OWNER operation.Related informationWho is the owner of a domain?Follow"
https://repost.aws/knowledge-center/route53-edit-domain-ownership
How do I move my Amazon Redshift provisioned cluster from one VPC to another VPC?
I want to move an Amazon Redshift cluster from one Amazon Virtual Private Cloud (Amazon VPC) to another VPC.
"I want to move an Amazon Redshift cluster from one Amazon Virtual Private Cloud (Amazon VPC) to another VPC.Short descriptionTo move an Amazon Redshift provisioned cluster from one VPC to another:Confirm the AWS Identity and Access Management (IAM) roles and configuration details of the source cluster.Create a cluster subnet group.Take a snapshot of the source cluster.Restore the cluster to the new cluster subnet group.Associate the IAM roles.ResolutionNote: Be sure to stop writes to the original cluster during the migration. Otherwise, some data might not be backed up to the new cluster.Confirm the IAM rolesOpen the Amazon Redshift console, and then choose CLUSTERS on the navigation pane.Select the Amazon Redshift cluster that you want to move.At the top of the page, choose the Actions dropdown list, and then choose Manage IAM roles.Note the IAM roles that are associated with your cluster. You'll associate these roles with the new cluster later.Create a cluster subnet groupCreate a cluster subnet group. For VPC, choose the ID of the VPC that you want to migrate the cluster to, and then add any associated subnets.Take a manual snapshot of the source clusterCreate a manual snapshot. For Cluster identifier, choose the cluster that you want to migrate.Restore the cluster to the new cluster subnet groupChoose the snapshot that you created, choose Restore from snapshot, and then choose Restore to provisioned cluster.Configure the properties of the new cluster. By default, Amazon Redshift automatically selects the same properties as the source cluster. Be sure that these properties are different from the source cluster:Cluster identifier Virtual private cloud (VPC): the VPC that you want to migrate the cluster toChoose Restore.Associate the IAM rolesOn the navigation pane, choose CLUSTERS, and then choose the new cluster.Choose the Actions drop-down list, and then choose Manage IAM roles.From Available IAM roles, choose the roles that are associated with the source cluster.Choose Add IAM role, and then choose Done.After the snapshot is restored and the new cluster status changes to Available, follow these steps:Rename the old cluster (for example, "oldcluster-1").Rename the new cluster to the original cluster name (for example, "cluster-1").Resume write operations to the cluster from client applications.Delete the old cluster.Related informationManaging clusters in a VPCWhy can't I access a VPC to launch my Amazon Redshift cluster?How do I copy an Amazon Redshift provisioned cluster to a different AWS account?Follow"
https://repost.aws/knowledge-center/move-redshift-cluster-vpcs
Why can’t I seamlessly join my Amazon EC2 Windows instance to an AWS Managed Microsoft AD directory?
I can't join my Amazon Elastic Compute Cloud (Amazon EC2) Windows instance to my AWS Directory Service for Microsoft Active Directory.
"I can't join my Amazon Elastic Compute Cloud (Amazon EC2) Windows instance to my AWS Directory Service for Microsoft Active Directory.ResolutionFollow these steps to troubleshoot issues when seamlessly joining your Amazon EC2 Windows instance to an AWS Managed Microsoft AD.Note: AWS Systems Manager doesn’t support seamless domain join for interface VPC endpoint because there are no interface VPC endpoints for AWS Directory Service. For more information, see VPC endpoint restrictions and limitations.Verify prerequisitesConfirm that you meet all prerequisites for using AWS Systems Manager.Verify AWS Identity and Access Management (IAM) role policies1.    Open the IAM console, and then choose Roles from the navigation pane.2.    Choose the Role name for the IAM role associated with your instance to open the Summary page.3.    On the Permissions tab, for Permissions policies, confirm that the AmazonSSMDirectoryServiceAccess and AmazonSSMManagedInstanceCore policies are attached.If either policy is missing, then choose Attach policies. Search for the policy names, and then choose Attach policy.Verify that the required ports are openVerify that ports 53, 88, and 389 are open in the directory security group. To locate the security group for your directory:1.    Open the Amazon EC2 console, and then choose Security Groups from the navigation pane under Network & Security.2.    Sort the list by Security group name to find directoryid_controllers, where directory_id is your directory ID. For example, d-1234567891_controllers.Note: Use Microsoft's Portqry.exe command line utility to test the domain's connectivity to the required ports.Verify that the DNS servers on your EC2 instance are pointing to the directory DNS serversRun the following command to display the network adapter configuration on the instance:ipconfig /allTo locate the directory DNS servers:1.    Open the Directory Service console, and then choose Directories from the navigation pane.2.    Choose your Directory ID to open the Directory Details page and view the DNS address.Confirm that you can resolve the domain name from the instanceRun the following command, replacing domainname with your domain name.Using PowerShell:Resolve-DnsName domainnameUsing a command prompt:nslookup domainnameVerify DNS server configurationVerify that you configured the correct DNS server on the instance, and that the instance can reach the DNS server. Run the following Nltest command:nltest /dsgetdc:domainname /forceNote: Be sure to replace domainname with the DNS name, and not the NetBIOS name. For example, if your domain is example.com, then the DNS name is example.com, and the NetBIOS name is example.Verify that the instance is reporting as ManagedFirst, open the AWS Systems Manager console. Next, choose Fleet Manager from the navigation pane to view all managed instances, and confirm that the instance is listed and online.Then, confirm that a corresponding State Manager association for the document awsconfig_Domain_directoryid_domainname was automatically created for the instance. Follow these steps:1.    From the Systems Manager console, choose State Manager from the navigation pane.2.    Select the search bar, choose Instance ID, Equal, and then enter the instance_id.3.    Verify the output of the execution for the association under Execution History. 
Confirm that the Status is Success.If the status is Failed, then review the output and detailed status to identify the cause of the issue.If the status is Pending, then verify that you followed all the previous troubleshooting steps. Then, review the logs on the EC2 instance for any explicit error messages to identify the cause of the issue. For instructions, see the following Troubleshooting section.Confirm that you can manually join the instance to the domainVerify that your account has the required permission to add computer objects to the domain. For more information, see Delegate directory join privileges for AWS Managed Microsoft AD.Confirm a successful seamless domain joinRetry joining a domain to verify that the previous steps resolved the issue.1.    Open the AWS Systems Manager console, and then choose State Manager from the navigation pane.2.    Select the association that you created to join the domain, and then choose Apply association now.3.    Verify that the Status is Success.TroubleshootingIf you're still having issues joining a domain, then review the following logs on the EC2 instance for indications of the problem.For Amazon SSM agent logs:Navigate to the following location to review the Amazon SSM agent logs: C:\ProgramData\Amazon\SSM\Logsnetsetup.log file:Open a command prompt, and then enter the following command:%windir%\debug\netsetup.logFor information about netsetup.log error codes, see How to troubleshoot errors that occur when you join Windows-based computers to a domain on the Microsoft website.For Event Viewer logs:1.    Open the Windows Start menu, and then open Event Viewer.2.    Choose Windows Logs from the navigation pane.3.    For Windows Logs, choose System.4.    Review the Date and Time column to identify events that occurred during the operation to join the domain.Related informationJoin an EC2 instance to your AWS Managed Microsoft AD DirectoryFollow"
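To confirm from the AWS CLI that the instance profile role has both required policies attached, you can list its attached policies. The role name is a placeholder.
aws iam list-attached-role-policies --role-name example-instance-role
The output should include AmazonSSMDirectoryServiceAccess and AmazonSSMManagedInstanceCore.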
https://repost.aws/knowledge-center/ec2-windows-seamless-join-microsoft-ad
How do I terminate instances in my AWS Marketplace subscription and cancel the subscription?
I want to delete the instances that are launched with my AWS Marketplace subscription and cancel the subscription.
"I want to delete the instances that are launched with my AWS Marketplace subscription and cancel the subscription.Short descriptionHere are a few things to keep in mind while canceling your software subscription:It's a best practice to terminate all the instances associated with your software subscription before you cancel your subscription.If you cancel a subscription to a software product without terminating all running instances of the software, then you're charged for any software usage. You might also incur infrastructure related charges related to using the product.After you cancel the software subscription, you can no longer start new instances of the software, either from AWS Marketplace console or the Amazon Elastic Compute Cloud (Amazon EC2) console. If the subscription that you're canceling is for a product with a monthly fee, then you're charged a prorated amount of the monthly fee on your next bill.ResolutionTerminate the instancesTo terminate instances in a software subscription, do the following:Sign in to the AWS Marketplace console.Choose Manage subscriptions.On the Manage subscriptions page, choose Manage next to the software subscription that you want to cancel.Choose Actions, and then choose View instances.Confirm the AWS Regions where the instances are running.Terminate all the instances from the Amazon EC2 console. For more information, see How do I delete or terminate EC2 resources?Cancel the software subscriptionAfter you terminate all the instances associated with your AWS Marketplace subscription, you can cancel the subscription by doing the following:On the Manage subscriptions page, choose Manage next to the software subscription that you want to cancel.Choose Actions, and then choose Cancel subscription.Select the check box to acknowledge that running instances are charged to your account, and then choose Yes, cancel subscription.Related informationHow do I sell an Amazon EC2 Reserved Instance on the EC2 Reserved Instance Marketplace?Follow"
https://repost.aws/knowledge-center/cancel-marketplace-subscription
How do I resolve the "Parameter validation failed: parameter value 'abc' for parameter name 'ABC' does not exist" error in CloudFormation?
"When I create or update my AWS CloudFormation stack, I get the following error: "Parameter validation failed: parameter value 'abc' for parameter name 'ABC' does not exist." How can I resolve this error?"
"When I create or update my AWS CloudFormation stack, I get the following error: "Parameter validation failed: parameter value 'abc' for parameter name 'ABC' does not exist." How can I resolve this error?Short descriptionAWS CloudFormation returns the parameter validation failed error when one of the parameters that's used in your CloudFormation template is an AWS-specific parameter type.You can receive this error when you use an AWS-specific parameter:To pass a value that doesn't exist in the AWS Region or account during stack creation.As a property for a resource, and then delete this value out of band before you update the resource during the stack update.As a parameter in a child stack. The error occurs when the value of the child stack that's passed from the parent stack doesn't match the parameter type. The error also occurs when the parameter's resource doesn't exist in the account in that Region.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Create a stack1.    Open the AWS CloudFormation console.2.    In the navigation pane, choose Stacks.3.    Form the Stack name column, choose the stack that failed.4.    Choose the Parameters tab.5.    In the Key column, search for the ABC parameter with the abc value.6.    Check the Parameters section of the template that's used to create your stack to verify that resource abc matches the AWS-specific parameter type.7.    Verify that the abc resource for the ABC parameter exists in the Region or account. Use either the AWS Management Console or the AWS CLI command to describe the resource. To find the right command for your resource, see the Find the describe command for your resource section.Note: For example, if you use the parameter type AWS::EC2::VPC::Id, then check the Amazon Virtual Private Cloud (Amazon VPC) console for the resource.8.    If ABC is a parameter to the child stack, then you must pass the abc value. Choose Option A or Option B.(Option A) If you're referencing another resource in the parent stack, then verify that this resource matches the AWS-specific parameter type used in the child stack.Note: For example, the stack fails if you use the parameter type AWS::EC2::Subnet::Id (subnet) and refer to resource type AWS::EC2::VPC.(Option B) If the abc value passes directly from the parent stack, then verify that the abc resource for the ABC parameter exists in the Region or account. Use either the AWS Management Console or the AWS CLI command to describe the resource. To find the right command for your resource, see the Find the describe command for your resource section.For example, consider the following List parameter in the child stack:"SecurityGroups": { "Description": "List of security group IDs for the instances", "Type": "List<AWS::EC2::SecurityGroup::Id>"}The value to the parameter passes from the parent stack. For example:"ChildStack" : { "Type" : "AWS::CloudFormation::Stack", "Properties" : { "Parameters":{ "KeyPair" : { "Ref": "KeyPair" }, "ImageID" : { "Ref": "ImageID" }, "InstanceType" : { "Ref": "InstanceType" }, "SecurityGroups" : { "Ref": "SecurityGroup" } }Important: In the preceding example, verify that the value of the security group ID that's passed to the SecurityGroup parameter exists in the Region or account.9.    
Create a new stack with a valid value for the parameter that exists in your Region or account and that matches the AWS-specific parameter type.Update the stackWhen a stack update fails, CloudFormation rolls back the changes. This means that you can't see the parameter value that's updated through the AWS CloudFormation console.You must change the value for the ABC parameter during the update. If you don't change the value, then the resource with the name or PhysicalID of abc might be deleted from the account out of band.1.    To verify that the resource exists, use either the AWS Management Console or the AWS CLI command to describe the resource. To find the right command for your resource, see the Find the describe command for your resource section.2.    If you're updating the stack by updating the ABC parameter, then follow steps 6, 7, and 8 in the preceding Create a stack section.3.    Update the stack by passing a valid value to the ABC parameter.Find the describe command for your resourceChoose the right command for your resource:For AWS::EC2::Image::Id or List<AWS::EC2::Image::Id>, use the command for AWS CLI version 1 or version 2.For AWS::EC2::Instance::Id or List<AWS::EC2::Instance::Id>, use the command for AWS CLI version 1 or version 2.For AWS::EC2::KeyPair::KeyName, use the command for AWS CLI version 1 or version 2.For AWS::EC2::SecurityGroup::GroupName, AWS::EC2::SecurityGroup::Id, List<AWS::EC2::SecurityGroup::GroupName>, or List<AWS::EC2::SecurityGroup::Id>, use the command for AWS CLI version 1 or version 2.For AWS::EC2::Subnet::Id or List<AWS::EC2::Subnet::Id>, use the command for AWS CLI version 1 or version 2.For AWS::EC2::VPC::Id or List<AWS::EC2::VPC::Id>, use the command for AWS CLI version 1 or version 2.For AWS::Route53::HostedZone::Id or List<AWS::Route53::HostedZone::Id>, use the command for AWS CLI version 1 or version 2.For AWS::EC2::AvailabilityZone::Name or List<AWS::EC2::AvailabilityZone::Name>, use the command for AWS CLI version 1 or version 2.For AWS::EC2::Volume::Id or List<AWS::EC2::Volume::Id>, use the command for AWS CLI version 1 or version 2.Follow"
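As an illustration of the describe step, the following commands check whether a VPC ID or subnet ID that's passed to an AWS-specific parameter exists in the target Region. The IDs and Region are placeholders.
aws ec2 describe-vpcs --vpc-ids vpc-0123456789abcdef0 --region us-east-1
aws ec2 describe-subnets --subnet-ids subnet-0123456789abcdef0 --region us-east-1
If the command returns a NotFound error (for example, InvalidVpcID.NotFound), then the value doesn't exist in that account and Region, which is what triggers the parameter validation failure.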
https://repost.aws/knowledge-center/cloudformation-parameter-validation
How can I create a custom event pattern for an EventBridge rule?
"I want to capture certain events for AWS services with an Amazon EventBridge rule. However, I'm unable to create a custom event pattern that matches the event. How can I create a custom EventBridge event pattern?"
"I want to capture certain events for AWS services with an Amazon EventBridge rule. However, I'm unable to create a custom event pattern that matches the event. How can I create a custom EventBridge event pattern?ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.EventBridge accepts events from AWS Services, EventBridge Partners, and custom events. This article discusses JSON events originating from AWS services. You can create EventBridge rules with event patterns to filter incoming events. This way, the EventBridge rule matches only the desired events and forwards those events to the targets.Determine the JSON format of the incoming eventThere are three methods for determining the JSON format for an incoming event:Refer to this list of sample events from AWS services that EventBridge receives.EventBridge provides the EventBridge Sandbox tool to assist users with creating and validating event patterns. For example, if you are interested in an EC2 Instance State-change event, you can do the following:Open the EventBridge console.From the navigation pane, under Developer resources, select Sandbox.Scroll to the Sample event section, then select AWS events.From the Sample events menu, select EC2 Instance State-change Notification. This populates the window with the first sample event. For a given event type, multiple samples might be available.Create an EventBridge rule with a simple event pattern that matches all events for a given AWS service. For example, this event pattern matches all Amazon Elastic Compute Cloud (Amazon EC2) events:{ "source": [ "aws.ec2" ]}Note: Wildcards and empty events aren't allowed in the event pattern.Next, associate an SNS or a CloudWatch Log Group target with the rule to capture inbound events. The target must have the Configure target input option set to Matched events so the JSON emitted by the service is received correctly.Create an event pattern in the same JSON format as the incoming eventThe following rules apply to creating a valid matching event pattern:Any fields that you don't specify in your event pattern are automatically matched. For example, if Detail isn't specified in the event pattern, then the event pattern matches every event with any detail.To match fields that are one level down in the JSON structure, use curly brackets { }. A JSON viewer might be helpful if you're looking at larger event structures.The string to be matched from the JSON event must be in square brackets [ ]. You can include multiple values in square brackets so that the event is invoked when either of the values are present in an incoming event. For example, to invoke an event based on every event sent by Amazon EC2 or Amazon DynamoDB, use this filter:{ "source": [ "aws.ec2", "aws.dynamodb" ]}Step 1: Obtain incoming event using SNS / CloudWatch targetThis example shows a Route 53 event emitted to EventBridge. The ChangeResourceRecordSets API call represents the creation of an A record in an Amazon Route 53 hosted zone. 
An Amazon Simple Notification Service (Amazon SNS) topic or Amazon CloudWatch Log Group target captures the following event:{ "version": "0", "id": "d857ae5c-cc83-3742-ab88-d825311ee4e9", "detail-type": "AWS API Call via CloudTrail", "source": "aws.route53", "account": "123456789012", "time": "2019-12-05T16:50:53Z", "region": "us-east-1", "resources": [], "detail": { "eventVersion": "1.05", "userIdentity": { "type": "AssumedRole", "principalId": "AROAABCDEFGHIJKLMNOPQ:Admin", "arn": "arn:aws:sts::123456789012:assumed-role/Admin", "accountId": "123456789012", "accessKeyId": "ASIAABCDEFGH12345678", "sessionContext": { "sessionIssuer": { "type": "Role", "principalId": "AROAABCDEFGHIJKLMNOPQ", "arn": "arn:aws:iam::123456789012:role/Admin", "accountId": "123456789012", "userName": "Admin" }, "webIdFederationData": {}, "attributes": { "mfaAuthenticated": "false", "creationDate": "2019-12-05T16:28:27Z" } } }, "eventTime": "2019-12-05T16:50:53Z", "eventSource": "route53.amazonaws.com", "eventName": "ChangeResourceRecordSets", "awsRegion": "us-east-1", "sourceIPAddress": "12.34.56.78", "userAgent": "console.amazonaws.com", "requestParameters": { "hostedZoneId": "Z1RP12345WXRQD", "changeBatch": { "changes": [ { "action": "CREATE", "resourceRecordSet": { "type": "A", "tTL": 300, "resourceRecords": [ { "value": "4.4.4.4" } ], "name": "test.example.us." } } ] } }, "responseElements": { "changeInfo": { "status": "PENDING", "id": "/change/C271P4WIKN511J", "submittedAt": "Dec 5, 2019 4:50:53 PM" } }, "additionalEventData": { "Note": "Do not use to reconstruct hosted zone" }, "requestID": "bbbf9847-96cb-45ef-b617-d535b9fe83d8", "eventID": "74e2d2c8-7497-4292-94d0-348272dbc4f7", "eventType": "AwsApiCall", "apiVersion": "2013-04-01" }}Step 2: Create the corresponding EventPatternThis example event pattern filters on a number of fields. For example, eventName, hostedZoneld, action, and type. Matching events must contain all the fields and corresponding values. The pattern isolates the A records created against a specific hosted zone.{ "source": [ "aws.route53" ], "detail": { "eventSource": [ "route53.amazonaws.com" ], "eventName": [ "ChangeResourceRecordSets" ], "requestParameters": { "hostedZoneId": [ "Z1RP12345WXRQD" ], "changeBatch": { "changes": { "action": [ "CREATE" ], "resourceRecordSet": { "type": [ "A" ] } } } } }}Test the event patternTest using the EventBridge consoleLeverage the EventBridge Sandbox: From the Sample event section, select or enter a sample event.Under Event pattern section, provide an event pattern. You can do this either by building an event pattern using the menus in the Event pattern form or by entering a custom event pattern with the Custom patterns (JSON editor).After both sections are populated, select Test pattern to confirm that the event pattern matches the given sample event.Test using the AWS CLIIn the AWS CLI, run the test-event-pattern command. To confirm that the event pattern matches, be sure that the result is true. By doing this, you can identify the JSON events sent by the AWS service and help your custom event pattern to capture specific events.Related informationAmazon EventBridge event patternsCreating Amazon EventBridge rules that react to eventsTutorial: Log AWS API calls using EventBridgeAmazon EventBridge - What's the difference between CloudWatch Events and EventBridge? (video)Follow"
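As a concrete sketch of the AWS CLI test described above, save the sample event and the event pattern to local files (the file names are placeholders), and then run:
aws events test-event-pattern --event-pattern file://pattern.json --event file://event.json
The command returns a Result of true when the pattern matches the event and false otherwise.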
https://repost.aws/knowledge-center/eventbridge-create-custom-event-pattern
How can I retrieve an Amazon S3 object that I deleted?
I want to retrieve an object that I deleted from my Amazon Simple Storage Service (Amazon S3) bucket.
"I want to retrieve an object that I deleted from my Amazon Simple Storage Service (Amazon S3) bucket.ResolutionYou can retrieve an object that you deleted from an Amazon S3 bucket when either of the following conditions are true:You turned on versioning for the bucket.The object is replicated to another S3 bucket through S3 replication or backed up using AWS Backup.Retrieve an object that was deleted from a bucket that was versioning-enabledIf the request to delete the object didn't include the version-id of the object, then you can retrieve the object. For more information, see How can I retrieve an Amazon S3 object that was deleted in a versioning-enabled bucket?Retrieve an object that was replicated through S3 replicationIf the object was replicated through S3 replication, you can access the replica of the object by doing the following:Open the Amazon S3 console.In the list of buckets, choose the name of the bucket where your deleted object was replicated to.In the list of objects, look for the replica of the object.Retrieve an object that was backed up using AWS BackupYou can use the restore operation from AWS Backup to restore the entire S3 bucket, folders, or objects within the bucket. For more information, see Restoring S3 backups.Follow"
https://repost.aws/knowledge-center/s3-retrieve-deleted-object
How do I troubleshoot issues when passing environment variables to my Amazon ECS task?
I want to troubleshoot issues when passing environment variables to my Amazon Elastic Container Service (Amazon ECS) task.
"I want to troubleshoot issues when passing environment variables to my Amazon Elastic Container Service (Amazon ECS) task.Short descriptionYou can pass an environment variable inside your Amazon ECS task in one of the following ways:Pass the variable as an environmentFiles object inside an Amazon Simple Storage Service (Amazon S3) bucket.Store the variable inside an AWS Systems Manager Parameter Store.Store the variable in your ECS task definition.Store the variable inside AWS Secrets Manager.Note: It's a security best practice to use Parameter Store or Secrets Manager for storing your sensitive data as an environment variable.When you pass the environment variables in one of the preceding methods, you might get the following errors:Parameter Store:Fetching secret data from SSM Parameter Store in region: AccessDeniedException: User: arn:aws:sts::123456789:assumed-role/ecsExecutionRole/f512996041234a63ac354214 is not authorized to perform: ssm:GetParameters on resource: arn:aws:ssm:ap-south-1:12345678:parameter/status code: 400, request id: e46b40ee-0a38-46da-aedd-05f23a41e861-or-ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve secrets from ssm: service call has been retried 5 time(s): RequestCanceledSecrets Manager:ResourceInitializationError error-or-AccessDenied error on Amazon Elastic Compute Cloud (Amazon EC2)To resolve these errors, see How do I troubleshoot issues related to AWS Secrets Manager secrets in Amazon ECS?Amazon S3:ResourceInitializationError: failed to download env files: file download command: non empty error streamYou might face issues when you pass environment variables to your Amazon ECS tasks due to the following reasons:Your Amazon ECS task execution role doesn't have the required AWS Identity and Management (IAM) permissions.There are issues with your network configuration.Your application is unable to read the environment variable.The format of variable in the container definition is incorrect.The environment variable isn't automatically refreshed.ResolutionYour Amazon ECS task execution role doesn't have the required IAM permissionsIf you're using environment variables inside Parameter Store or Secrets Manage, then review AWS CloudTrail events for either of the following API calls:GetParameters for Parameter Store-or-GetSecretValue for Secrets ManagerIf you notice the AccessDenied error for task execution role in CloudTrail events, then manually add the required permissions as an inline policy to your ECS task execution IAM role. 
You can also create a customer managed policy and add the policy to your ECS task execution role.If you're using Secrets Manager, then include the following permissions to your task execution role:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "secretsmanager:GetSecretValue", "kms:Decrypt" ], "Resource": [ "arn:aws:secretsmanager:example-region:11112222333344445555:secret:example-secret", "arn:aws:kms:example-region:1111222233334444:key/example-key-id" ] } ]}If you're using the Parameter Store, then include the following permissions to your task execution role:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ssm:GetParameters", "secretsmanager:GetSecretValue", "kms:Decrypt" ], "Resource": [ "arn:aws:ssm:example-region:1111222233334444:parameter/example-parameter", "arn:aws:secretsmanager:example-region:1111222233334444:secret:example-secret", "arn:aws:kms:example-region:1111222233334444:key/example-key-id" ] } ]}To use an S3 bucket for storing the environment variable as a .env file, manually add the following permissions as an inline policy to the task execution role:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": [ "arn:aws:s3:::example-bucket/example-folder/example-env-file" ] }, { "Effect": "Allow", "Action": [ "s3:GetBucketLocation" ], "Resource": [ "arn:aws:s3:::example-bucket" ] } ]}There are issues with your network configurationIf your ECS task is in a private subnet, verify the following:Be sure that the security group for the task or service allows egress traffic on port 443.If you're using a VPC endpoint, be sure that the network ACL allows egress traffic on port 443.Verify the connectivity to Systems Manager/Secrets Manager and Amazon S3 endpoint using the telnet command.If you're using a NAT gateway, then be sure that your task has a default route to the NAT gateway.Be sure that you defined the VPC endpoints for your tasks. If you defined the VPC endpoints, then be sure that you have the required VPC endpoints for Secrets Manager/Systems Manager Parameter Store and S3.If you're using a VPC endpoint, be sure of the following:The security group for your VPC endpoint allows egress traffic from the task or service on port 443.The VPC endpoint is associated with the correct VPC.The VPC attributes enableDnsHostnames and enableDnsSupport are turned on.If your ECS task is in a public subnet, verify the following:Be sure that task has a public IP address enabled.Be sure that the security group of your VPC has outbound access on port 443 to the internet.Be sure that the network ACL configuration allows all traffic to flow in and out of the subnets to the internet.Your application is unable to read the environment variableTo check whether the correct environment variables are populated inside your task container, do the following:List out all the environment variables that are exposed inside the container.Verify that this list includes the environment variables that you defined in the task definition or the .env file in S3.If you're using the Amazon EC2 or AWS Fargate launch types, then it's a best practice to use the ECS Exec feature. You can use this feature to run commands in or get a shell to a container running on an Amazon EC2 instance or Fargate. 
After enabling this feature, run the following command to interact with your container.aws ecs execute-command --cluster example-cluster \--task example-task-id \--container example-container \--interactive \--command "/bin/sh"If you're using the Amazon EC2 launch type, you can also use the Docker exec command to interact with your container. In this case, do the following:Connect to the container instance where your task is running. Then, run the following Docker command to find the container ID of your task container.docker container psRun the following command to interact with the container:docker exec -it example-container-id bashNote: Select the shell according to your container default shell.After establishing connection with the container, run the env command on your container to get the complete list of your environment variables. Review this list to make sure that the environment variables that you defined in the task definition or .env file are present.The format of variable in the container definition is incorrectWhen you define environment variables inside container definition, be sure to define the environment variables as KeyValuePair objects similar to the following:"environment": [{ "name": "foo", "value": "bar"}]Be sure to use this format when you define the environment variables in your .env files as well.The environment variable isn't automatically refreshedWhen you update the environmental variable in your .env file, the variable doesn't get automatically refreshed in your running container.To inject the updated values of environmental variables in your task, update the service by running the following command:aws ecs update-service --cluster example-cluster --service example-service --force-new-deploymentIf you're using environment variables in your container definition, then you must create a new task definition to refresh the updated environment variables. With this new task definition, you can create a new task or update your ECS service.aws ecs update-service --cluster example-cluster --service example-service --task-definition <family:revision>Note:Keep the following in mind when you pass environment variables to your task:If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file.If multiple environment files are specified and they contain the same variable, they are processed in the order of entry. The first value of the variable is used and subsequent values of duplicate variables are ignored. It's a best practice to use unique variable names.If an environment file is specified as a container override, the file is used. Any other environment files specified in a container definition are ignored.The environment variables are available to the PID 1 processes in a container from the file /proc/1/environ. If the container is running multiple processes or init processes, such as wrapper script, start script, or supervisord, then the environment variable is unavailable to non-PID 1 processes.Related informationSpecifying environment variablesFollow"
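To double-check what the registered task definition declares before connecting into a container, you can query its container definitions. The family and revision are placeholders.
aws ecs describe-task-definition --task-definition example-family:1 --query "taskDefinition.containerDefinitions[].{env:environment,envFiles:environmentFiles}"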
https://repost.aws/knowledge-center/ecs-task-environment-variables
Why is my Aurora PostgreSQL-Compatible DB instance snapshot taking so long to copy?
"My Amazon Aurora PostgreSQL-Compatible Edition DB instance snapshot is taking a long time to copy. The dashboard shows 100%, but the snapshot export is still in progress."
"My Amazon Aurora PostgreSQL-Compatible Edition DB instance snapshot is taking a long time to copy. The dashboard shows 100%, but the snapshot export is still in progress.Short descriptionAmazon Relational Database Service (Amazon RDS) and Amazon Aurora DB instances can be backed up by using the snapshot method. Snapshot copies involve copying automated backups or manual DB cluster snapshots. When you copy a snapshot, you create a manual snapshot. Snapshot exports involve exporting your DB cluster snapshot data to an Amazon Simple Storage Solution (Amazon S3) bucket.You can copy snapshot backups across different AWS Regions or within the same Region. You can also make multiple copies by using unique identifiers. Sometime these snapshot copies or exports can take long time.The time needed for a snapshot copy or export to complete is influenced by a number of factors, including:The size of the volumeWhether this is the first snapshot you have taken of the volume (full copy), or an incremental snapshotThe number of modified blocks since the previous snapshotShared network bandwidthWrite activity on the volumeNote: A first-time snapshot copy is always a full copy. This generally takes more time to complete. Subsequent copies of the snapshot to the same destination from the same target are incremental. This generally takes less time.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent version of the AWS CLI.Aurora PostgreSQL-Compatible DB instance snapshot is taking a long time to copySnapshot copy time also varies depending on number of factors, and can take multiple hours to copy. These factors include:The Regions involved in snapshot copy processThe amount of data to be copiedThe number of snapshot cross-Region requests occurring at the same time from given source RegionDepending on the AWS Regions involved and the amount of data to be copied, a cross-Region snapshot copy can take hours to complete. In some cases, there might be a large number of cross-Region snapshot copy requests from a given source Region. In such cases, Amazon RDS might put new cross-Region copy requests from that source Region into a queue. Amazon RDS does this until some in-progress copies complete. No progress information is displayed about copy requests while they are in the queue. Progress information is displayed only when the copy starts.The dashboard shows 100% but the snapshot export is still in progressWhen you are exporting a snapshot to Amazon S3, you might see that the task is in progress, but shows as 100%. During the export process, the initial data size is estimated, and then continuously corrected during the process. Percentages are calculated based on the extracted data over the estimated data size. So the percentage can show as 100% even when the status is still in progress. 
To monitor the snapshot export progress, use the AWS CLI to run the describe-export-tasks command and review the TotalExtractedDataInGB value in the output.Example:$ aws rds describe-export-tasks --export-task-identifier <TaskIdentifier>{ "ExportTasks": [ { "ExportTaskIdentifier": "XXX", "SourceArn": "arn:aws:rds:us-east-1:XXXX:snapshot:rds:XXXX-2022-11-06-09-54", "SnapshotTime": "2022-11-06T09:55:00.522000+00:00", "S3Bucket": "XXXX", "S3Prefix": "", "IamRoleArn": "arn:aws:iam::XXXX:role/service-role/XXXX", "KmsKeyId": "arn:aws:kms:us-east-1:XXXXX:key/XXXXXXX", "Status": "STARTING", "PercentProgress": 0, "TotalExtractedDataInGB": 0 } ]}Related informationCreating a DB cluster snapshotFollow"
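To watch only the progress fields instead of the full JSON, you can filter the same call with a query; the export task identifier is a placeholder.
aws rds describe-export-tasks --export-task-identifier example-export-task --query "ExportTasks[].{Status:Status,PercentProgress:PercentProgress,ExtractedGB:TotalExtractedDataInGB}"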
https://repost.aws/knowledge-center/aurora-postgressql-backup-storage
"I dropped a user from an Amazon Redshift database, but the user still appears in the pg_class table"
Why do deleted Amazon Redshift users still appear in the pg_class table but not the pg_user table?
"Why do deleted Amazon Redshift users still appear in the pg_class table but not the pg_user table?Short descriptionDeleted users can still appear in the pg_class table when the dropped user owns an object in another database in the cluster. The DROP USER command only checks the current database for objects that are owned by the user who is about to be dropped. If the user owns an object in another database, then no errors are thrown. Instead, the dropped user retains ownership of the object and the usesysid still appears in the pg_class table.ResolutionRun the following command on each database in your cluster. This command checks for objects that are owned by deleted users.select distinct schemaname, tablename, tableowner from pg_tables where tableowner like '%unknown%';If a deleted user still owns objects in a database, you get an output similar to the following. The tableowner is "unknown" because the owner was deleted from the pg_user table, where usenames are stored.demo_localdb=# select distinct schemaname, tablename, tableowner from pg_tables where tableowner like '%unknown%'; schemaname | tablename | tableowner ------------+-----------------+------------------- demo_local | orders_nocomp_2 | unknown (UID=114) demo_local | orders_nocomp_3 | unknown (UID=115)(2 rows)Use the ALTER TABLE command to transfer ownership of the objects. In the following example, the new owner is userc.alter table demo_local.orders_nocomp_2 owner to userc;PreventionBefore dropping a user, run the following query to find objects that they own. (You can also use the v_find_dropuser_objs view to do this.) In the following example, the user is labuser.SELECT decode(pgc.relkind, 'r','Table', 'v','View' ) , pgu.usename, pgu.usesysid, nc.nspname, pgc.relname, 'alter table ' || QUOTE_IDENT(nc.nspname) || '.' || QUOTE_IDENT(pgc.relname) || ' owner to 'FROM pg_class pgc, pg_user pgu, pg_namespace ncWHERE pgc.relnamespace = nc.oidAND pgc.relkind IN ('r','v')AND pgu.usesysid = pgc.relownerAND nc.nspname NOT ILIKE 'pg\_temp\_%'AND pgu.usename = 'labuser'This query might not find users who own schemas that were created with a command similar to the following. In this example, labuser is the schema owner.create schema demo_local authorization labuser;Users with schema-level authorizations can't be dropped unless the schema is transferred to a new owner. Here's an example of how to transfer ownership to userc:alter table demo_local.orders_nocomp_2 owner to userc;Related informationHow do I resolve the "user cannot be dropped" error in Amazon Redshift?Querying the catalog tablesView database usersFollow"
https://repost.aws/knowledge-center/dropped-user-pg-class-table-redshift
How do I set up an Active/Active or Active/Passive Direct Connect connection to AWS from a public virtual interface?
How do I set up an Active/Active or Active/Passive AWS Direct Connect connection to AWS services from a public virtual interface?
"How do I set up an Active/Active or Active/Passive AWS Direct Connect connection to AWS services from a public virtual interface?Short descriptionWhen using Direct Connect to transport production workloads between AWS services, it's a best practice to create two connections through different data centers or providers. You have two options on how to configure your connections:Active/Active – Traffic is load-shared between interfaces based on flow. If one connection becomes unavailable, then all traffic is routed through the other connection.Active/Passive – One connection handles traffic, and the other is on standby. If the active connection becomes unavailable, then all traffic is routed through the passive connection.When configuring public virtual interfaces, you can use a public or private Autonomous System Number (ASN) for your on-premises peer router for the new virtual interface. The valid values are 1 to 2,147,483,647.Per the Internet Assigned Numbers Authority (IANA), the following ASNs are available for private use:2-byte private ASNs – 64,512 to 65,5344-byte private ASNs – 4,200,000,000 to 4,294,967,294ResolutionConfiguring an Active/Active connectionIf you're using a public ASN:Allow your customer gateway to advertise the same prefix (public IP or network that you own) with the same Border Gateway Protocol (BGP) attributes on both public virtual interfaces. This configuration permits you to load balance traffic over both public virtual interfaces.Check the vendor documentation for device-specific commands for your customer gateway device.If you're using a private ASN, load balancing on a public virtual interface isn't supported.Note: If you're using two Direct Connect connections with two public virtual interfaces for redundancy, then confirm that both interfaces are terminated on different AWS devices. To confirm this, check the AWS device IDs by opening the Direct Connect console, and then choose Connections.Configuring an Active/Passive connectionIf you're using a public ASN:Confirm that your customer gateway is advertising the same prefix (public IP or network that you own) on both BGP sessions.Identify the connection that you plan to set as the secondary connection. Then, start advertising the on-premises public prefixes with additional AS_Path prepends in the BGP attributes. For example, if your customer gateway uses ASN 123, then the gateway can advertise the prefix on the secondary connection with AS_Path set to 123 123 123 123. With this configuration, AWS always sends traffic to on-premises prefixes on the connection with the shorter AS_Path.Identify which connection you plan to set as the primary connection. Then, increase the Local Preference (local-pref to be sure that the on-premises router always chooses the correct path for sending traffic to AWS. A higher Local Preference (local-pref) value is preferred, and the default is 100. For more information, see Public virtual interface routing policies.The primary connection is considered the primary path. In the event of a failure, traffic is shifted to the secondary connection as a secondary path.If you're using a private ASN:Confirm that your customer gateway is advertising the longer prefix on your primary connection. For example, if you're advertising prefix X.X.X.0/24, then your customer gateway can advertise two prefixes (X.X.X.0/25 and X.X.X.128/25) on your primary connection. 
In this example, your customer gateway can also advertise prefix X.X.X.0/24 on your secondary connection.If both interfaces are UP, and the longer prefix is advertised on your primary connection, then traffic is sent to your customer gateway through the primary connection. In the event of a failure, traffic is shifted and sent to the secondary connection.Related informationAWS Direct Connect virtual interfacesConfigure redundant connections with AWS Direct ConnectWhich type of virtual interface should I use to connect different resources in AWS?Follow"
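For reference, the step that precedes the BGP tuning above, creating the public virtual interface itself, can be scripted with the AWS CLI. The following is a minimal sketch with placeholder connection ID, VLAN, ASN, peer addresses, and prefix; the field names follow the Direct Connect NewPublicVirtualInterface structure, so confirm them against the CLI reference for your CLI version.
aws directconnect create-public-virtual-interface \
  --connection-id dxcon-ffabc123 \
  --new-public-virtual-interface '{"virtualInterfaceName":"example-public-vif","vlan":101,"asn":65000,"amazonAddress":"203.0.113.1/30","customerAddress":"203.0.113.2/30","addressFamily":"ipv4","routeFilterPrefixes":[{"cidr":"198.51.100.0/24"}]}'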
https://repost.aws/knowledge-center/dx-create-dx-connection-from-public-vif
How do I migrate from AWS WAF classic to AWS WAF and what is the downtime during the migration?
I want to migrate my current AWS WAF classic deployment to AWS WAF. How do I do this? Is there downtime involved in the migration?
"I want to migrate my current AWS WAF classic deployment to AWS WAF. How do I do this? Is there downtime involved in the migration?Short descriptionThere are three options to migrate from AWS WAF Classic to AWS WAF:Manual migrationAutomated using the AWS WAF security automationAutomated using the AWS WAF classic migration wizardImportant: Before starting the migration, see Migration caveats and limitations.ResolutionManual migrationManual migrations are suitable for simple AWS WAF deployments. A manual migration is re-creating classic AWS WAF resources using AWS WAF. The switch from the AWS WAF classic web ACL association to the new AWS WAF web ACL might cause a brief disruption.To perform a manual migration, do the following:To create a new AWS WAF deployment, see Getting started with AWS WAF.Complete the steps in Migrating a web ACL: switchover.See Migrating a web ACL: additional considerations to optimize the new AWS WAF deployment.AWS WAF security automation migrationUse AWS WAF security automation to automatically migrate to AWS WAF using AWS CloudFormation. Then, associate the new web ACL with a supported resource, such as:Amazon CloudFront distributionAmazon API Gateway REST APIApplication Load Balancer (ALB)AWS AppSync GraphQL APIThere's no downtime involved in this migration process. It's a best practice to test and tune your AWS WAF protections before implementing rules in production.Important: When migrating an AWS WAF classic deployment created by the AWS WAF security automation, you must not use the AWS WAF classic migration wizard. For additional information, see Migration caveats and limitations.To deploy a new web ACL using AWS WAF security automation, do the following:Open the AWS WAF Automation on AWS page.Navigate to AWS Solution overview.Choose Launch in the AWS Console on the right-hand side of the diagram.For Region, choose the AWS Region where you want to create your AWS WAF resources.For Create stack, use the default settings and then choose Next.Enter a Stack name and choose the Parameters for your use case. For information on Parameters, see Launch a stack.Important: Be sure that you choose the correct Endpoint Type. The type must match the resource you're currently using in AWS WAF classic. If you're using Amazon API Gateway REST API or Application Load Balancer, then choose ALB.Choose Next.(Optional) Configure stack options or use the default settings. Then, choose Next.Review your configuration. Then, acknowledge that CloudFormation will create AWS Identity and Access Management (IAM) resources in your account.Choose Create Stack.CloudFormation creates a new stack with all the resources required for the AWS security automation, including a new AWS WAF web ACL.Important: The new AWS WAF web ACL isn't automatically associated with any AWS resources.To complete the migration to AWS WAF, you must manually associate the AWS WAF web ACL with your AWS resource. This process automatically disassociates the AWS resource from the AWS WAF classic web ACL. After a resource is associated with this AWS WAF web ACL, requests are inspected by the rules in the new AWS WAF web ACL.After successfully migrating to AWS WAF, it's a best practice to review Migrating a web ACL: additional considerations to optimize the new AWS WAF deployment.Note: You might need to manually re-create existing rules that can't be automatically migrated. 
For more information, see Migrating a web ACL: manual follow-up.Automated migration using the AWS WAF classic migration wizardUse the AWS WAF Classic migration wizard to automatically migrate existing AWS WAF classic resources to AWS WAF. There are cases where the AWS WAF classic migration must not be used. For more information, see Migration caveats and limitations.There's no downtime involved in this migration process. It's a best practice to test and tune your AWS WAF protections before implementing rules in production.To deploy a new web ACL using automated AWS WAF classic migration wizard, do the following:Open the AWS WAF console.In the navigation pane, choose Switch to AWS WAF Classic.In the navigation pane, choose Web ACLs.At the top of the main page, choose the migration wizard.For Web ACL, choose the AWS Region where you want to create your AWS WAF resources. Then, choose the AWS WAF classic web ACL that you want to migrate.For Migration configuration, choose Create new to create a new S3 bucket to be used by CloudFormation during the migration. Note: The S3 bucket must be in the same Region as the web ACL and its name must start with the prefix aws-waf-migration-.It's a best practice to use Auto apply the bucket policy required for migration to avoid permission issues.Choose your preferred option for Choose how to handle rules that can't be migrated.Note: It's a best practice to use Exclude rules that can't be migrated to continue the migration. However, you must manually create rules that can't be automatically migrated when the migration has completed.Choose Next.Choose Start creating CloudFormation template.Choose Create CloudFormation Stack to start the deployment of the AWS WAF CloudFormation stack.For Create stack, use the default settings and then choose Next.Enter a Stack name and choose the Parameters for your use case. For information on Parameters, see Launch a stack.Important: Be sure that you choose the correct Endpoint Type. The type must match the resource you're currently using in AWS WAF classic. If you're using Amazon API Gateway REST API or Application Load Balancer, then choose ALB.Choose Next.(Optional) Configure stack options or use the default settings. Then, choose Next.Review your configuration, then choose Create Stack.CloudFormation creates a new stack with all the resources that are migrated from AWS WAF classic, including a new AWS WAF web ACL.Important: The new AWS WAF web ACL isn't automatically associated with any AWS resources.To complete the migration to AWS WAF, you must manually associate the AWS WAF web ACL with your AWS resource. This process automatically disassociates the AWS resource from the AWS WAF classic web ACL. After a resource is associated with this AWS WAF web ACL, requests are inspected by the rules in the new AWS WAF web ACL.After successfully migrating to AWS WAF, it's a best practice to review Migrating a web ACL: additional considerations to optimize the new AWS WAF deployment.Note: You might need to manually re-create existing rules which could not be automatically migrated. For more information, see Migrating a web ACL: manual follow-up.Related informationMigrating your rules from AWS WAF Classic to the new AWS WAFFollow"
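The manual association step described above can also be done with the AWS CLI once the CloudFormation stack has created the new web ACL. This is a minimal sketch for a regional resource such as an Application Load Balancer; both ARNs are placeholders, and CloudFront distributions are instead associated by setting the web ACL on the distribution configuration.
# List the AWS WAF (wafv2) web ACLs created by the migration.
aws wafv2 list-web-acls --scope REGIONAL --region us-east-1
# Associate the new web ACL with the resource; this replaces any AWS WAF Classic association on that resource.
aws wafv2 associate-web-acl \
  --web-acl-arn arn:aws:wafv2:us-east-1:111122223333:regional/webacl/example-web-acl/a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \
  --resource-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/example-alb/1234567890abcdef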
https://repost.aws/knowledge-center/migrate-waf-classic-to-aws-waf
"When I start my instance with encrypted volumes attached, the instance immediately stops with the error "client error on launch"."
"I launched an Amazon Elastic Compute Cloud (Amazon EC2) instance that has encrypted volumes attached, but the instance doesn't start. The instance immediately goes from a pending state to shutting down, and then finally to a terminated state. I ran the AWS Command Line Interface (AWS CLI) command describe-instances on the terminated instance and get the following error:...."StateReason": {"Code": "Client.InternalError""Message": "Client.InternalError: Client error on launch"},....How can I resolve this?"
"I launched an Amazon Elastic Compute Cloud (Amazon EC2) instance that has encrypted volumes attached, but the instance doesn't start. The instance immediately goes from a pending state to shutting down, and then finally to a terminated state. I ran the AWS Command Line Interface (AWS CLI) command describe-instances on the terminated instance and get the following error:...."StateReason": {"Code": "Client.InternalError""Message": "Client.InternalError: Client error on launch"},....How can I resolve this?Short descriptionThis issue occurs with EC2 instances with encrypted volumes attached if:The AWS Key Management Service (AWS KMS) or AWS Identity and Access Management (IAM) user launching the instances don't have the required permissions.The KMS key usage is restricted by the SourceIp condition key.The IAM user must have permission to AWS KMS to decrypt the AWS KMS key.To allow access to decrypt a KMS key, you must use the key policy with IAM policies or grants. IAM policies alone aren't sufficient to allow access to a KMS key, but you can use them in combination with a KMS key's policy.KMS keys by default grant access only to the root account. When EC2 full privilege is provided to an IAM user or role, AWS KMS permissions must explicitly grant access to the KMS keys policy.ResolutionCreate an IAM policy to allow the IAM principal to call AWS KMS APIsNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.1.    Open the IAM console, choose Policies, and then choose Create policy.2.    Choose the JSON tab, and then copy and paste this policy, using your key ARN for Resource:{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "kms:Decrypt", "kms:ReEncrypt*", "kms:Encrypt", "kms:GenerateDataKey*", "kms:DescribeKey", "kms:CreateGrant" ], "Resource": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab" } ]}3.    Choose Review policy.4.    In Name, enter a name that is meaningful to you, and then choose Create policy.5.    Choose the policy that you created in step 4.6.    Choose the Policy usage tab, and then choose Attach.7.    In Name, choose the IAM entity that you want to grant permission to KMS key, and then choose Attach policy.Grant the IAM principal explicit access to a KMS key1.    Open the AWS KMS console, and choose Customer managed keys.2.    In Key ID, choose your Key ID.3.    In Key users, choose Add.4.    
In Name, choose the IAM user or role, and then choose Add.Note: If you're using a custom key policy instead of the default key policy, then the KMS key must explicitly grant the following permissions:{ "Sid": "Allow use of the key", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::123456789012:role/MyRoleName", "arn:aws:iam::123456789012:user/MyUserName" ] }, "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey" ], "Resource": "*" }, { "Sid": "Allow attachment of persistent resources", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::123456789012:role/MyRoleName", "arn:aws:iam::123456789012:user/MyUserName" ] }, "Action": [ "kms:CreateGrant", "kms:ListGrants", "kms:RevokeGrant" ], "Resource": "*", "Condition": { "Bool": { "kms:GrantIsForAWSResource": "true" } } }Use IP address conditionIf you use AWS KMS to protect your data in an integrated service, then be careful when specifying the IP address condition operators or the aws:SourceIp condition key in the same access policy statement. Attaching an encrypted Amazon Elastic Block Store (Amazon EBS) volume to an Amazon EC2 instance causes Amazon EC2 to send a request to AWS KMS. The request decrypts the volume's encrypted data key. This request comes from an IP address associated with the EC2 instance and not the user's IP address. This means that the decryption request is rejected if you have a SourceIp condition set, and the instance fails.Use the kms:ViaService condition key. AWS KMS allows interactions from that service on your behalf. Be sure that the principals have permission to use the KMS key and integrated service. For more information, see kms:ViaService condition key limits.Note: EC2 instances with logged-on users can't interact with this condition—only the service on your behalf can. This interaction is logged in AWS CloudTrail logs for you to review.In the following example, the CloudTrail entry for an API call is made to AWS KMS. This is called on by Amazon EC2 infrastructure, not from a specific IP address. When you add a policy to a user that allows AWS KMS to interact with Amazon EC2, then the API call can complete."userIdentity": { "sessionContext": { "sessionIssuer": { "accountId": "450822418798", "principalId": "450822418798:aws:ec2-infrastructure", "userName": "aws:ec2-infrastructure", "arn": "arn:aws:iam::450822418798:role/aws:ec2-infrastructure", "type": "Role" },... "eventType": "AwsApiCall", "@log_group": "CloudTrail/AllRegionLogGroup", "awsRegion": "eu-west-1", "requestParameters": { "encryptionContext": { "aws:ebs:id": "vol-0ca158925aa9c1883" }}Related informationUsing policy conditions with AWS KMSHow can I be sure that authenticated encryption with associated data encryption is used when I'm calling the AWS KMS APIs?Follow"
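If you prefer the AWS CLI to the console steps above, the same policy can be created and attached with two calls. This is a minimal sketch; the policy file, policy name, role name, and account ID are placeholders, and you would use attach-user-policy instead for an IAM user.
# Create the customer managed policy from the JSON shown above, saved locally as kms-ebs-access.json.
aws iam create-policy --policy-name kms-ebs-access --policy-document file://kms-ebs-access.json
# Attach the policy to the IAM role that launches the instances.
aws iam attach-role-policy \
  --role-name ExampleEC2LaunchRole \
  --policy-arn arn:aws:iam::111122223333:policy/kms-ebs-access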
https://repost.aws/knowledge-center/encrypted-volumes-stops-immediately
Why isn't my AWS Chatbot receiving messages from Amazon SNS?
"I subscribed an AWS Chatbot to my Amazon Simple Notification Service (Amazon SNS) topic. However, notifications from my Amazon SNS topic aren't reaching the AWS Chatbot. How do I troubleshoot the issue?"
"I subscribed an AWS Chatbot to my Amazon Simple Notification Service (Amazon SNS) topic. However, notifications from my Amazon SNS topic aren't reaching the AWS Chatbot. How do I troubleshoot the issue?Short descriptionIf your Amazon SNS topic's notifications aren't reaching your AWS Chatbot, then one of the following is misconfigured:(For Slack only) The communication channel between the AWS Chatbot and the Slack channel.(For Slack and Amazon Chime) The communication channel between the Amazon SNS topic and the AWS Chatbot.ResolutionTurn on CloudWatch Logs for your AWS ChatbotFollow the instructions in Accessing Amazon CloudWatch Logs for AWS Chatbot.(For Slack only) Verify that the communication channel between the AWS Chatbot and the Slack channel is configured correctlyMake sure that the Slack channel isn't archived or deletedArchived or deleted Slack channels can't receive messages. All the apps in archived or deleted Slack channels are deactivated.To unarchive a channel, see Archive or delete a channel in the Slack help center.Note: You can't undelete a Slack channel. If the subscribed Slack channel is deleted, you must create a new Slack channel and configure the new channel to receive notifications from your topic.Make sure that the AWS Chatbot app is installed on your Slack workspaceReview your AWS Chatbot CloudWatch Logs for the following error message: account_inactive. If you see an account_inactive error message, then your AWS Chatbot app isn't installed on your Slack workspace.To install the AWS Chatbot app on your Slack workspace, follow the instructions in Set up chat clients for AWS Chatbot.(For private Slack channels only) Make sure that the AWS Chatbot app is added to the Slack channelReview your AWS Chatbot CloudWatch Logs for the following error message: channel_not_found. If you see a channel_not_found error message, then your AWS Chatbot app hasn't been added to the private channel.To add the AWS Chatbot to a private Slack channel, run the /invite @AWS command in the private channel.(For Slack and Amazon Chime) Verify that the communication channel between the Amazon SNS topic and the AWS Chatbot is configured correctlyMake sure that your AWS Chatbot is subscribed to your Amazon SNS topic1.    Open the AWS Chatbot console.2.    Under Configured clients, choose Slack or Amazon Chime based on your use case.3.    Choose your Slack channel in the Slack workspace configuration, or your webhook in the Amazon Chime webhooks list.4.    Choose Edit.5.    In the Details pane, under Topics, verify that your Amazon SNS topic is listed. If the topic isn't listed, you must subscribe your Amazon SNS topic to your AWS Chatbot.Make sure that the AWS Chatbot endpoint is listed as a topic subscription for your Amazon SNS topic1.    Open the Amazon SNS console.2.    In the left navigation pane, choose Topics. Then, choose the name of your Amazon SNS topic.3.    Under Topic subscriptions, make sure that the following AWS Chatbot endpoint is listed: https://global.sns-api.chatbot.amazonaws.com. If the AWS Chatbot endpoint isn't listed as a topic subscription, then you must subscribe your Amazon SNS topic to your AWS Chatbot.Note: To test the setup, use your AWS Chatbot configuration to send a test notification.Make sure that you're not manually publishing messages to your Amazon SNS topicAWS Chatbot doesn't support messages that are manually published to an Amazon SNS topic. 
Make sure that you send Amazon SNS notifications to your AWS Chatbot only through one of the services that are supported by AWS Chatbot.Make sure that the AWS service that's publishing messages to your Amazon SNS topic is supported by AWS ChatbotReview your AWS Chatbot CloudWatch Logs for the following error message: Event Received is not supported. If you see an Event Received is not supported error message, then the service publishing messages to your topic isn't supported by AWS Chatbot.For a full list of services that are supported by AWS Chatbot, see Using AWS Chatbot with other AWS services.Make sure that your Amazon SNS topic's access policy grants the required permissions for another AWS service to publish messages to the topic1.    Open the Amazon SNS console.2.    In the left navigation pane, choose Topics.3.    Choose the topic that you subscribed your AWS Chatbot to. Then, choose Edit.4.    Choose the Access policy tab. Then, review the Statement section of access policy. Make sure that the policy allows the correct AWS service to run the SNS:Publish API action.5.    If your Amazon SNS access policy doesn't allow the correct service to publish events to your topic, update the policy by doing the following:In the Details section of your topic page, choose Edit.Expand the Access policy section, and then add the required permissions.Note: For examples of Amazon SNS access policies, see Configure Amazon SNS topics for notifications in the Developer Tools console User Guide. Also, see Creating an Amazon SNS topic for budget notifications in the AWS Billing and Cost Management User Guide.Make sure that raw message delivery isn't activated on your Amazon SNS topicAWS Chatbot doesn't accept raw message delivery. To verify that raw message delivery is activated on your Amazon SNS topic, do the following:1.    Open the Amazon SNS console.2.    In the left navigation pane, choose Topics. Then, choose the name of your Amazon SNS topic.3.    In the Details pane, for Raw message delivery, verify if the status is listed as enabled or disabled.4.    If the status is listed as enabled, then turn off raw message delivery on your Amazon SNS topic by doing the following:Choose Edit.Choose Enable raw message delivery to deselect the raw message delivery option.Choose Save changes.(If you're using Amazon SNS topics with server-side encryption activated) Make sure that you include the required AWS Key Management Service (AWS KMS) key policy permissionsYour AWS KMS key policy must allow the service that's sending messages to publish to your encrypted SNS topics.Make sure that your AWS KMS key policy includes the following section:Important: Replace events.amazonaws.com with the AWS service principal for the service that's publishing to your encrypted SNS topics{ "Sid": "Allow CWE to use the key", "Effect": "Allow", "Principal": { "Service": "events.amazonaws.com" }, "Action": [ "kms:Decrypt", "kms:GenerateDataKey" ], "Resource": "*"}Note: To test the configuration using the AWS Management Console, your AWS Identity and Access Management (IAM) role requires permission to use the AWS KMS key.Make sure that you're not publishing messages to your Amazon SNS topic at a rate higher than 10 notifications per secondAWS Chatbot allows for 10 events per second. 
If more than 10 events per second are received, then additional messages are throttled.To verify whether your events are being throttled, review the EventsThrottled metric in your Amazon CloudWatch Logs for AWS Chatbot.(If you're using Amazon EventBridge) Make sure that your EventBridge events don't use input transformersAWS Chatbot doesn't recognize EventBridge input transformers. To verify if your Amazon EventBridge events aren't using input transformers, do the following:1.    Open the EventBridge console.2.    In the left navigation pane, choose Rules. Then, choose the name of your configured event rule.3.    Select the check box next to the Amazon SNS topic that you've configured as a target for the rule. Then, choose View details.4.    Verify if there's Input Transformer listed under Input section on the details page or not. If Input Transformer is listed, then remove the input transformers from your rule.Note: For more information, see Transforming Amazon EventBridge target input.(If you're using EventBridge) Make sure that you're not sending event notifications from AWS services that AWS Chatbot doesn't support through EventBridgeAWS Chatbot doesn't support event notifications sent through Amazon EventBridge from the following AWS services:Amazon CloudWatchAWS CodeBuildAWS CodeCommitAWS CodeDeployAWS CodePipelineRelated informationTroubleshooting AWS ChatbotHow do I use webhooks to publish Amazon SNS messages to Amazon Chime, Slack, or Microsoft Teams?Test notifications from AWS services to Amazon Chime or Slack using CloudWatchFollow"
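A few of the checks above (the topic subscription and the raw message delivery setting) can also be confirmed from the AWS CLI. The topic and subscription ARNs below are placeholders.
# Confirm that the AWS Chatbot endpoint (https://global.sns-api.chatbot.amazonaws.com) is subscribed to the topic.
aws sns list-subscriptions-by-topic --topic-arn arn:aws:sns:us-east-1:111122223333:example-topic
# Confirm that raw message delivery is turned off for that subscription ("RawMessageDelivery": "false").
aws sns get-subscription-attributes \
  --subscription-arn arn:aws:sns:us-east-1:111122223333:example-topic:1a2b3c4d-5678-90ab-cdef-EXAMPLE11111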
https://repost.aws/knowledge-center/sns-aws-chatbot-message-troubleshooting
How do I resolve "ResourceInitializationError: failed to validate logger args" error in Amazon ECS?
"When I run a task in Amazon Elastic Container Service (Amazon ECS), I receive a "ResourceInitializationError: failed to validate logger args" error."
"When I run a task in Amazon Elastic Container Service (Amazon ECS), I receive a "ResourceInitializationError: failed to validate logger args" error.Short descriptionWhen an Amazon ECS task can't find the Amazon CloudWatch log group that's defined in the task definitionAmazon, Amazon ECS returns a ResourceInitialization error. You receive the following error message: "ResourceInitializationError: failed to validate logger args: create stream has been retried 1 times: failed to create Cloudwatch log stream: ResourceNotFoundException: The specified log group does not exist. : exit status 1"To resolve the error, create a new log group for the task.ResolutionTo resolve the ResourceInitialization error, review the following solutions to create a new log group for the task.If you don't know which log group is defined in the task definition and returning an error, then run the following command:aws ecs describe-task-definition --task-definition nginx-fargate:3 | jq -r .taskDefinition.containerDefinitions[].logConfigurationThe output describes the log group that you must recreate in CloudWatch.Create a CloudWatch log group in the console1.    Open the CloudWatch console.2.    From the navigation bar, choose the Region where Amazon ECS cluster is located.3.    In the left navigation pane, choose Logs and then select Log groups.4.    In the Log groups window, choose Create log group.Create a CloudWatch log group using AWS CLICreate a CloudWatch log group with the create-log-group AWS Command Line Interface (AWS CLI) command. The following example command creates a log group named mylogs:Note: If you receive errors when running AWS CLI commands, then make sure that you’re using the most recent version of the AWS CLI.aws logs create-log-group --log-group-name mylogsUse the auto-configuration feature in the Amazon ECS consoleThe auto-configure option creates a log group on your behalf using the task definition family name with ecs as the prefix. The following example specifies a Log Configuration in your Task Definition:{ "containerDefinitions": [ { "logConfiguration": { "logDriver": "awslogs", "options": { "awslogs-create-group": "true", "awslogs-group": "awslogs-wordpress", "awslogs-region": "us-west-2", "awslogs-stream-prefix": "awslogs-example" } } } ]}You can also create a custom log group with the following steps:1.    Specify log configuration options.   Add the key awslogs-create-group with a value of true. This creates the log group on your behalf.The following example specifies a Log Configuration in your Task Definition with options set:{ "containerDefinitions": [ { "logConfiguration": { "logDriver": "awslogs", "options": { "awslogs-group": "example_container", "awslogs-region": "eu-west-1", "awslogs-create-group": "true", "awslogs-stream-prefix": "example" } } } ]}Note: Managed AWS Identity and Access Management (IAM) policy AmazonECSTaskExecutionRolePolicy doesn't include logs:CreateLogGroup permissions. To use the awslogs-create-group option, add logs:CreateLogGroup as an inline IAM policy.Follow"
https://repost.aws/knowledge-center/ecs-resource-initialization-error
How do I set up table mappings in AWS DMS?
How do I set up table mappings in AWS Database Migration Service (AWS DMS)?
"How do I set up table mappings in AWS Database Migration Service (AWS DMS)?ResolutionUse the steps in this article to set up table mapping on your AWS DMS task using the AWS DMS console. For more information on working with table mapping and how it can be used, see Using table mapping to specify task settings. You can also use transformations in a table mapping to perform tasks like renaming tables, and removing a table column. You must choose at least one selection rule for your task to work correctly, and to use transformation rules on your task. For more information, see the limitations of using transformation rules.Follow these steps to set up table mappings.Open the AWS DMS console, and then choose Database migration tasks from the navigation pane.Choose Create task.Enter the details for your Task configuration and Task settings.Choose Enable CloudWatch logs.From the Table mappings section, choose Guided UI. You can also choose JSON editor to enter the mappings in JSON format.From the Selection rules section, choose Add new selection rule.Note: To add more than one selection rule, choose Add new selection rule again.Enter your Schema and a Table name. Enter % to select all available schemas or tables.For Action, choose Include or Exclude.Choose Create task.To add table mappings to a task that already exists:Choose Database migration tasks from the navigation pane.Choose your task, choose Actions, and then choose Modify.From the Table mappings section, expand Selection rules, and then choose Add new selection rule.Enter the details of your selection rule, and then choose Save.Related informationCreating a taskTroubleshooting migration tasks in AWS Database Migration ServiceHow can I turn on monitoring for an AWS DMS task?Follow"
https://repost.aws/knowledge-center/table-mappings-aws-dms
How do I connect to my Amazon MSK cluster using the Kafka-Kinesis-Connector?
"When I try to use the Kafka-Kinesis-Connector to connect with Amazon Managed Streaming for Apache Kafka (Amazon MSK), I receive an error message. How do I connect to my Amazon MSK cluster using the Kafka-Kinesis-Connector?"
"When I try to use the Kafka-Kinesis-Connector to connect with Amazon Managed Streaming for Apache Kafka (Amazon MSK), I receive an error message. How do I connect to my Amazon MSK cluster using the Kafka-Kinesis-Connector?Short descriptionTo connect to your MSK cluster using the Kafka-Kinesis-Connector, your setup must meet the following requirements:An active AWS subscription.A virtual private cloud (VPC) that is visible to both the client machine and MSK cluster. The MSK cluster and client must reside in the same VPC.Connectivity to MSK and Apache Zookeeper servers.Two subnets associated to your VPC.Topics created in MSK to send and receive messages from the server.ResolutionBuilding your project file1.    Clone the kafka-kinesis-connector project to download the Kafka-Kinesis-Connector.2.    Use the mvn package command to build the amazon-kinesis-kafka-connector-X.X.X.jar file in the target directory:[ec2-user@ip-10-0-0-71 kinesis-kafka-connector]$ mvn package........[INFO] Replacing /home/ec2-user/kafka-kinesis-connector/kafka-kinesis-connector/target/amazon-kinesis-kafka-connector-0.0.9-SNAPSHOT.jar with /home/ec2-user/kafka-kinesis-connector/kafka-kinesis-connector/target/amazon-kinesis-kafka-connector-0.0.9-SNAPSHOT-shaded.jar[INFO] ------------------------------------------------------------------------[INFO] BUILD SUCCESS[INFO] ------------------------------------------------------------------------[INFO] Total time: 28.822 s[INFO] Finished at: 2020-02-19T13:01:31Z[INFO] Final Memory: 26M/66M[INFO] ------------------------------------------------------------------------The Kafka-Kinesis-Connector looks for credentials in the following order: environment variables, java system properties, and the credentials profile file.3.    Update your configuration to the DefaultAWSCredentailsProviderChain setting:[ec2-user@ip-10-0-0-71 target]$ aws configureThis command makes sure that the access key attached to the AWS Identity and Access Management (IAM) user has the minimum required permissions. The aws configure command also makes sure that there is a policy available to access Amazon Kinesis Data Streams or Amazon Kinesis Data Firehose. For more information about setting AWS credentials, see Working with AWS credentials.Note: If you are using a Java Development Kit (JDK), you can also use the EnvironmentVariableCredentialsProvider class to provide credentials.4.    If you are using Kinesis Data Streams, then update your policy to the following:{ "Version": "2012-10-17", "Statement": [{ "Sid": "Stmt123", "Effect": "Allow", "Action": [ "kinesis:DescribeStream", "kinesis:PutRecord", "kinesis:PutRecords", "kinesis:GetShardIterator", "kinesis:GetRecords", "kinesis:ListShards", "kinesis:DescribeStreamSummary", "kinesis:RegisterStreamConsumer" ], "Resource": [ "arn:aws:kinesis:us-west-2:123xxxxxxxxx:stream/StreamName" ] }]}If you are using Kinesis Data Firehose, then update your policy to look like the following example:{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Action": [ "firehose:DeleteDeliveryStream", "firehose:PutRecord", "firehose:PutRecordBatch", "firehose:UpdateDestination" ], "Resource": [ "arn:aws:firehose:us-west-2:123xxxxxxxxx:deliverystream/DeliveryStreamName" ] }]}For more information about the Kinesis Data Firehose delivery stream settings, see Configuration and credential file settings.Configuring the connectorNote: You can configure the Kafka-Kinesis-Connector to publish messages from MSK. 
Messages can be published to the following destinations: Amazon Simple Storage Service (Amazon S3), Amazon Redshift, or Amazon OpenSearch Service.1.    If you are setting up Kinesis Data Streams, you can configure the connector with the following values:name=YOUR_CONNECTER_NAMEconnector.class=com.amazon.kinesis.kafka.AmazonKinesisSinkConnectortasks.max=1topics=YOUR_TOPIC_NAMEregion=us-east-1streamName=YOUR_STREAM_NAMEusePartitionAsHashKey=falseflushSync=true# Use new Kinesis Producer for each PartitionsingleKinesisProducerPerPartition=true# Whether to block new records from putting onto Kinesis Producer if# threshold for outstanding records have reachedpauseConsumption=trueoutstandingRecordsThreshold=500000# If outstanding records on producers are beyond threshold sleep for following period (in ms)sleepPeriod=1000# If outstanding records on producers are not cleared sleep for following cycle before killing the taskssleepCycles=10# Kinesis Producer Configuration - https://github.com/awslabs/amazon-kinesis-producer/blob/main/java/amazon-kinesis-producer-sample/default_config.properties# All kinesis producer configuration have not been exposedmaxBufferedTime=1500maxConnections=1rateLimit=100ttl=60000metricsLevel=detailedmetricsGranuality=shardmetricsNameSpace=KafkaKinesisStreamsConnectoraggregation=true-or-If you are setting up a different type of stream, configure the Kinesis Data Firehose delivery stream properties like this:name=YOUR_CONNECTER_NAMEconnector.class=com.amazon.kinesis.kafka.FirehoseSinkConnectortasks.max=1topics=YOUR_TOPIC_NAMEregion=us-east-1batch=truebatchSize=500batchSizeInBytes=3670016deliveryStream=YOUR_DELIVERY_STREAM_NAME2.    Configure the worker properties for either standalone or distributed mode:bootstrap.servers=localhost:9092key.converter=org.apache.kafka.connect.storage.StringConvertervalue.converter=org.apache.kafka.connect.storage.StringConverter#internal.value.converter=org.apache.kafka.connect.storage.StringConverter#internal.key.converter=org.apache.kafka.connect.storage.StringConverterinternal.value.converter=org.apache.kafka.connect.json.JsonConverterinternal.key.converter=org.apache.kafka.connect.json.JsonConverterkey.converter.schemas.enable=truevalue.converter.schemas.enable=trueinternal.key.converter.schemas.enable=trueinternal.value.converter.schemas.enable=trueoffset.storage.file.filename=offset.logFor more information about Kafka-Kinesis-Connector's standalone or distributed mode, see Kafka Connect on the Apache website.3.    Copy the amazon-kinesis-kafka-connector-0.0.X.jar file to your directory and export classpath.Note: You can also add the amazon-kinesis-kafka-connector-0.0.X.jar file to the JAVA_HOME/lib/ext directory.4.    Run the kafka-kinesis-connector by using the following command syntax:[ec2-user@ip-10-0-0-71 kafka_2.12-2.2.1]$ ./bin/connect-standalone.sh /home/ec2-user/kafka-kinesis-connector/kafka-kinesis-connector/worker.properties /home/ec2-user/kafka-kinesis-connector/kafka-kinesis-connector/kinesis-kafka-streams-connecter.propertiesRelated informationCreating an Amazon MSK clusterFollow"
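Step 4 above starts the connector in standalone mode. If you run Kafka Connect in distributed mode instead, the worker script takes only a worker properties file (distributed mode needs group.id and topic-based offset, config, and status storage rather than the file-based offset setting shown above), and the connector is then submitted through the Connect REST API. The file name and the default REST port of 8083 below are assumptions.
# Start the Connect worker in distributed mode (worker-distributed.properties is a hypothetical file name).
./bin/connect-distributed.sh /home/ec2-user/kafka-kinesis-connector/worker-distributed.properties
# Submit the connector configuration to the worker's REST API.
curl -X POST -H "Content-Type: application/json" \
  --data '{"name":"kinesis-kafka-streams-connector","config":{"connector.class":"com.amazon.kinesis.kafka.AmazonKinesisSinkConnector","tasks.max":"1","topics":"YOUR_TOPIC_NAME","region":"us-east-1","streamName":"YOUR_STREAM_NAME"}}' \
  http://localhost:8083/connectors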
https://repost.aws/knowledge-center/kinesis-kafka-connector-msk
How do I troubleshoot issues with real-time metrics in Amazon Connect?
"In Amazon Connect, I want to troubleshoot common issues with real-time metrics."
"In Amazon Connect, I want to troubleshoot common issues with real-time metrics.ResolutionBefore troubleshooting issues with real-time metrics, be sure you have the permissions required to view real-time metric reports.Real-time metrics refreshThe numerical data for real-time metrics updates every 15 seconds for active pages. The real-time metrics reflect Agent status changes in the dashboard between Active, Availability, Missed, and Occupancy, with a small delay.Note: The Amazon Connect near real-time metrics refresh about a minute after a contact ends.Agent statusOnly agents who are logged in are seen in the real-time metrics dashboard. Agents that are logged out won't be seen in the real-time metrics dashboard.Viewing all queuesOnly active queues are seen on the real-time metrics report page. If you want to see multiple queues, then change the time range displayed.Note: The real-time metrics reports pull only the first 100 queues for active data. Active queues greater than 100 won't be seen.Barge live conversationsError message: Something went wrong-or-Failed to Monitor Agent DataTo troubleshoot issues with the barge feature, be sure you have turned on the Enhanced Monitoring capability. Also, be sure you have assigned the correct security profile permissions to barge a live conversation.Note: The status of the agent must be Available to barge a call.To barge a call, you must also set recording and analytics behavior in the contact flow. If the Set recording and analytics behavior flow block is missing, then you see a 400 Bad Request HTTP error.If you still can't barge a live conversation, then do the following:Create a HAR file to capture the events when using the barge feature.Create an AWS Support case.Attach the HAR file to the support case.APIs for real-time metricsThe GetCurrentMetricData API retrieves all the values of the real-time metrics dashboard. The dashboard is generated based on the values retrieved from the GetCurrentMetricData API.Contacts removed from the queue percentageReal-time metrics include SL X for contacts removed from the queue between zero and X (service level time in seconds). Both zero and X are considered in the calculations. This means that if the service level is 30, then the percentage of contacts removed from the queue between zero and 30 includes 30.For example, if you preset your X to 30, then the percentage of contacts removed from the queue calculation is between zero and 30. In this example, when calculating the service level of 30, the SL X metric includes both zero and 30 in the calculation.Follow"
https://repost.aws/knowledge-center/connect-real-time-metric-issues
How do I add a second Elastic IP address to an elastic network interface attached to my EC2 instance running CentOS 6 or RHEL 6?
How do I add a second Elastic IP address to an elastic network interface attached to my Amazon Elastic Compute Cloud (Amazon EC2) instance running CentOS 6 or RHEL 6 and have it persist during bootup?
"How do I add a second Elastic IP address to an elastic network interface attached to my Amazon Elastic Compute Cloud (Amazon EC2) instance running CentOS 6 or RHEL 6 and have it persist during bootup?Short descriptionWhen you add a second Elastic IP address to an elastic network interface, that Elastic IP address is lost when you reboot the interface. To make the second Elastic IP address persist during the reboot, you must create a second interface configuration file (ICF).ICFs control the software interfaces for individual network devices. The system uses these files as it boots to determine what interfaces to bring up and how to configure them.The default ICF is /etc/sysconfig/network-scripts/ifcfg-eth0. When two Elastic IP addresses exist on a single interface, the second Elastic IP address becomes ":1"—that is, /etc/sysconfig/network-scripts/ifcfg-eth0:1.ResolutionCreate a second interface configuration file1.    Attach two Elastic IP addresses to the elastic network interface from the Amazon EC2 console. For more information, see Multiple IP addresses.2.    Use the touch command to create the ifcfg-eth0:1 file for the second Elastic IP address in the /etc/sysconfig/network-scripts/ directory:$ sudo touch /etc/sysconfig/network-scripts/ifcfg-eth0:13.    Add the following parameters to the ifcfg-eth0:1 file:DEVICE=eth0:1BOOTPROTO=staticNETMASK=255.255.255.0ONBOOT=yesTYPE=EthernetIPADDR=172.31.34.195Note: IPADDR uses the private IP address associated with the second Elastic IP address you associated with the interface. After you select your instance, you can find the private IP address in the Amazon EC2 console under Secondary private IPs.Change the context of the second ICF to match the default ICF1.    To view the security context of the ifcfg-eth0:1 file, use the –Z option with the ls command:$ ls -Z ifcfg-eth*2.    Change the user to system_u using the –u option with the chcon command:$ sudo chcon -u system_u ifcfg-eth0:13.    Change the type to net_conf_t using the –t option with the chcon command:$ sudo chcon -t net_conf_t ifcfg-eth0:14.    Compare the two files by running the following command:$ ls -Z ifcfg-eth0*-rw-r--r--. root root system_u:object_r:net_conf_t:s0 ifcfg-eth0-rw-r--r--. root root system_u:object_r:net_conf_t:s0 ifcfg-eth0:1Bring up the interface1.    Bring up the second interface by running the ifup command:$ sudo ifup eth0:12.    If there are issues with the second ICF, run the ethtool command to verify detection of the second ICF:$ ethtool eth0:1The output appears similar to the following:Settings for eth0:1:Link detected: yesIf the output isn't as expected, run the ifup command and verify that the second interface is present. Then, review the ICF file to be sure that it is correct, and reload it.Reboot the instanceReboot your instance by running the reboot command.Related informationElastic IP addressesElastic network interfacesFollow"
https://repost.aws/knowledge-center/second-eip-centos-rhel-single-eni-ec2
Why can't I connect to my Amazon EC2 Linux instance using SSH?
I can't connect to my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance and want to troubleshoot the issue.
"I can't connect to my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance and want to troubleshoot the issue.Short descriptionTo troubleshoot the issue, log in to the EC2 instance over SSH with verbose messaging on. Use the output messages from the SSH client to determine the type of issue. Then, follow the troubleshooting steps in this article to resolve the issue.ResolutionLog in to your instance to identify the issue1.    Log in to the EC2 instance over SSH with verbose messaging on:user@localhost:~$ ssh -v -i my_key.pem ec2-user@11.22.33.44This example uses my_key.pem for the private key file, and a user name of ec2-user@11.22.33.44. Substitute your key file and your user name for the example's key file and user name. For more information, see Connect to your Linux instance using SSH.2.    Use the output messages from the SSH client to determine the type of issue you are experiencing.Use the EC2 Serial Console for Linux to troubleshoot Nitro-based instance typesIf you turned on EC2 Serial Console for Linux, you can use it to troubleshoot supported Nitro-based instance types. You can access the serial console using the serial console or the AWS Command Line Interface (AWS CLI). You don't need a working connection to connect to your instance when you use the EC2 Serial console.Before you use the serial console to troubleshoot:Grant access to the serial console at the account levelCreate AWS Identity and Access Management (IAM) policies granting access to your IAM usersCheck that your instance includes at least on password-based userIf connecting with EC2 Instance Connect using the AWS CLI, make sure that you’re using the most recent version of the AWS CLI.Troubleshoot common errorsError: "Connection timed out" or "Connection refused": To resolve this error, see I'm receiving "Connection refused" or "Connection timed out" errors when trying to connect to my EC2 instance with SSH. How do I resolve this?"connection timed out" errors on a virtual private cloud (VPC): To resolve this error, see How do I troubleshoot Amazon EC2 instance connection timeout errors from the internet?Error: "Permission denied" or "Authentication failed": To resolve this error, see I'm receiving "Permission denied (publickey)" or "Authentication failed, permission denied" errors when trying to access my EC2 instance. How do I resolve this?Error: "Server refused our key": To resolve this error, see Why am I getting a "Server refused our key" error when I try to connect to my EC2 instance using SSH?Error: "imported-openssh-key" or "Putty Fatal Error": To resolve this error, see Why am I receiving "imported-openssh-key" or "Putty Fatal Error" errors when connecting to my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance?Error: "Enter passphrase for key 'my_key.pem'":This error occurs if you created a password for your key file, but haven't manually entered the password. To resolve this error, enter the password or use ssh-agent to load the key automatically.Automatically troubleshoot SSH errorsThere are a number of reasons why you might get an SSH error, like Resource temporarily unavailable. Run the AWSSupport-TroubleshootSSH automation document to automatically find and resolve errors like this.Related informationHow do I troubleshoot issues connecting to my EC2 instance using EC2 Instance Connect?How do I troubleshoot SSH or RDP connectivity to my EC2 instances launched in a Wavelength Zone?Troubleshoot connecting to your instanceFollow"
https://repost.aws/knowledge-center/ec2-linux-ssh-troubleshooting
How can I receive Amazon SNS notifications when my AWS Glue job changes states?
"I want to get a notification when an AWS Glue extract, transform, and load (ETL) job succeeds, fails, times out, or stops."
"I want to get a notification when an AWS Glue extract, transform, and load (ETL) job succeeds, fails, times out, or stops.Short descriptionCreate and subscribe to an Amazon Simple Notification Service (Amazon SNS) topic. Then, create an Amazon EventBridge event rule for each state change that you want to monitor.Note: For this issue, it's a best practice to use Amazon EventBridge instead of Amazon CloudWatch.ResolutionCreate and subscribe to an Amazon SNS topic1.    Open the Amazon SNS console.2.    Select Topics, and then select Create topic.3.    Enter a Topic name. The Display name field is optional.4.    Select Create topic.5.    Select Subscriptions from the navigation pane. Then, choose Create subscription.6.    Under Details, complete the following fields:        For Topic ARN, choose the ARN of the topic that you created.        For Protocol, choose Email.        For Endpoint, enter the email address that you want the notifications to be sent to.7.    Select Create subscription.8.    Check your email account, and wait to receive a subscription confirmation email message. When you receive it, select the Confirm subscription link.Create an EventBridge event rule1.    Open the EventBridge console.2.    In the navigation pane, select Rules, and then select Create rule.3.    Enter a name for your rule. Leave the other fields as their default selections, and then select Next.4.    Scroll down to the Creation method section, and choose Custom pattern (JSON editor).5.    In the Event pattern box, enter code similar to the following. Replace job_name with the name of your AWS Glue ETL job. For state, enter the state changed that you want to be notified about (SUCCEEDED, FAILED, TIMEOUT, or STOPPED). Create separate event rules for each state change that you want to monitor:{ "detail-type": "Glue Job State Change", "source": "aws.glue", "detail": { "jobName": "MyJob", "state": "SUCCEEDED" }}6.    Select Next. This brings you to the Select target(s) page.7.    For Target types, choose AWS service. Then, choose SNS topic from the dropdown list.8.    In the Topic dropdown list, choose the name of the SNS topic that you created earlier.9.    Select Next. This brings you to the Configure tags - optional page. Select Next, and then select Create rule.To test the event rule and SNS topic, run an AWS Glue job. Verify that you receive an email notification when the job changes to the state that you specified in the event rule.Related informationHow can I use an AWS Lambda function to receive SNS alerts when an AWS Glue job fails a retry?How can I automatically start an AWS Glue job when a crawler run completes?Automating AWS Glue with CloudWatch EventsAWS Glue EventsFollow"
https://repost.aws/knowledge-center/glue-sns-notification-state
How do I use my CloudFront distribution to restrict access to an Amazon S3 bucket?
I want to restrict access to my Amazon Simple Storage Service (Amazon S3) bucket so that users access objects only through my Amazon CloudFront distribution.
"I want to restrict access to my Amazon Simple Storage Service (Amazon S3) bucket so that users access objects only through my Amazon CloudFront distribution.ResolutionImportant: Before you begin, be sure that the Amazon S3 origin of your CloudFront distribution is configured as a REST API endpoint. For example, AWSDOC-EXAMPLE-BUCKET.s3.amazonaws.com. This resolution doesn't apply to S3 origins that are configured as a website endpoint. For example, AWSDOC-EXAMPLE-BUCKET.s3-website-us-east-1.amazonaws.com. For more information, see How do I use CloudFront to serve a static website hosted on Amazon S3?Option 1 (Best practice): Create a CloudFront origin access control (OAC)Open the CloudFront console.From the list of distributions, choose the distribution that serves content from the S3 bucket that you want to restrict access to.Choose the Origins tab.Select the S3 origin, and then choose Edit.For Origin Access, select Origin access control settings (recommended).For Origin access control, select an existing OAC, or choose the Create Control setting.In the dialogue box, name your control setting. It's a best practice to keep the default setting as Sign requests (recommended). Then, choose Create.For S3 bucket Access, apply the bucket policy on the S3 bucket. Select Copy policy, and then select Save.Select Go to S3 bucket permissions to take you to the S3 bucket console.Select Save Changes.In the Amazon S3 console, from your list of buckets, choose the bucket that's the origin of the CloudFront distribution.Choose the Permissions tab.Under Bucket Policy, confirm that you see a statement similar to the following:{ "Version": "2012-10-17", "Statement": { "Sid": "AllowCloudFrontServicePrincipalReadOnly", "Effect": "Allow", "Principal": { "Service": "cloudfront.amazonaws.com" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*", "Condition": { "StringEquals": { "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE" } } }}You must add the preceding statement to allow CloudFront OAC to read objects from your bucket.Note: After you restrict access to your bucket using the CloudFront OAC, you have the option to add another layer of security by integrating AWS WAF.Option 2: Create a legacy CloudFront origin access identity (OAI)Open the CloudFront console.From the list of distributions, choose the distribution that serves content from the S3 bucket that you want to restrict access to.Choose the Origins tab.Select the S3 origin, and then choose Edit.For Origin Access, select Legacy access identities.In the Origin access identity dropdown list, select the origin access identity name, or choose Create new OAI.In the dialog box, name your new origin access identity, and choose Create.For Bucket policy, select Yes, update the bucket policy.Choose Save Changes.In the Amazon S3 console,from your list of buckets, Choose the bucket that's the origin of the CloudFront distribution.Choose the Permissions tab.Under Bucket Policy, confirm that you see a statement similar to the following:{{"Sid": "1","Effect": "Allow","Principal": {"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EAF5XXXXXXXXX"},"Action": "s3:GetObject","Resource": "arn:aws:s3:::AWSDOC-EXAMPLE-BUCKET/*"}Note: Review your bucket policy for any statements with "Effect": "Deny" that prevent access to the bucket from the CloudFront OAI. 
Modify those statements so that the CloudFront OAI can access objects in the bucket.Also, review your bucket policy for any statements with "Effect": "Allow" that allow access to the bucket from any source that's not the CloudFront OAI. Modify those statements as required by your use case.Related informationCreating a distributionIdentity and access management in Amazon S3Follow"
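To double-check the result of either option from the command line, pull the bucket policy and confirm that the CloudFront statement is present. The bucket name below is a placeholder.
aws s3api get-bucket-policy --bucket DOC-EXAMPLE-BUCKET --query Policy --output text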
https://repost.aws/knowledge-center/cloudfront-access-to-amazon-s3
How do I resolve cluster creation errors in Amazon EKS?
I get service errors when I provision an Amazon Elastic Kubernetes Service (Amazon EKS) cluster using AWS CloudFormation or eksctl.
"I get service errors when I provision an Amazon Elastic Kubernetes Service (Amazon EKS) cluster using AWS CloudFormation or eksctl.Short descriptionConsider the following troubleshooting options:You receive an error message stating that your targeted Availability Zone doesn't have sufficient capacity to support the cluster. Complete the steps in the Recreate the cluster in a different Availability Zone section.You receive an error message stating that resource creation failed. Complete the steps in the Confirm that you have the correct IAM permissions to create a cluster section, or the Monitor your Amazon VPC resources section.You receive an error message stating that the creation timed out when waiting for worker nodes. Complete the steps in the Confirm that your worker nodes can reach the control plane API endpoint section.ResolutionRecreate the cluster in a different Availability ZoneIf you launch control plane instances in an Availability Zone with limited capacity, then you can receive an error that's similar to the following:Cannot create cluster 'sample-cluster' because us-east-1d, the targeted availability zone, does not currently have sufficient capacity to support the cluster. Retry and choose from these availability zones: us-east-1a, us-east-1b, us-east-1cTo resolve the preceding error, create the cluster again using the recommended Availability Zones from the error message.If you're provisioning the cluster using CloudFormation, then in the Subnets parameter add subnet values that match the Availability Zones.-or-If you're using eksctl, then use the --zones flag to add the values for the different Availability Zones. For example:$ eksctl create cluster 'sample-cluster' --zones us-east-1a,us-east-1b,us-east-1cNote: Replace sample-cluster with your cluster name. Replace us-east-1a, us-east-1b, and us-east-1c with your Availability Zones.Confirm that you have the correct IAM permissions to create a clusterWhen you create a cluster, verify that you have the correct AWS Identity and Access Management (IAM) permissions. This includes correct policies for the Amazon EKS service IAM role.You can use eksctl to create the prerequisite resources for your cluster, such as the IAM roles and security groups. The required minimum permissions depend on the eksctl configuration that you're launching. For more information, see troubleshooting solutions from the eksctl GitHub community.If your cluster has issues with IAM permissions, then you can receive an error in eksctl that's similar to the following:API: iam:CreateRole User: arn:aws:iam::your-account-id:user/your-user-name is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::your-account-id:role/eksctl-newtest22-cluster-ServiceRole-10NXBYLSN4ULPTo resolve the preceding error, review the minimum IAM policies for running eksctl use cases on the eksctl website. Also, see Identity and Access Management for Amazon EKS, and How can I troubleshoot access denied or unauthorized operation errors with an IAM policy?Monitor your Amazon VPC resourcesWhen you create a cluster, eksctl creates a new Amazon Virtual Private Cloud (Amazon VPC) by default. If you don't want eksctl to create a new Amazon VPC, then you must specify your custom Amazon VPC and subnets in the configuration file.If your cluster has issues with your Amazon VPC limits, then you can receive the following error message:The maximum number of VPCs has been reached. 
(Service: AmazonEC2; Status Code: 400; Error Code: VpcLimitExceeded; Request ID: a12b34cd-567e-890-123f-ghi4j56k7lmn)To resolve the preceding error, monitor your resources. For example, check the number of Amazon VPCs in your AWS Region or the internet gateways per Region where you create the cluster. For more information, see Amazon VPC quotas.For issues related to resource constraints on the number of Amazon VPC resources in your Region, consider one of the following options:(Option 1) Use an existing Amazon VPC to overcome resource constraintsCreate a configuration file that specifies the Amazon VPC and subnets where you want to provision your cluster's worker nodes:$ eksctl create cluster sample-cluster -f cluster.yaml-or-(Option 2) Request a service quota increase to overcome resource constraintsRequest a service quota increase for the resources in the CloudFormation stack events of the cluster that eksctl provisioned.Confirm that your worker nodes can reach the control plane API endpointWhen eksctl deploys your cluster, it waits for the launched worker nodes to join the cluster and reach Ready status. If your worker nodes don't reach the control plane or have an invalid IAM role, then you can receive the following error:timed out (after 25m0s) waiting for at least 4 nodes to join the cluster and become ready in "eksfbots-ng1"To resolve the preceding error, get your worker nodes to join the cluster, and confirm that your worker nodes are in Ready status.Follow"
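To check how close you are to the VPC limit mentioned above before creating the cluster, you can compare the current VPC count in the Region with the applied quota. This is a minimal sketch; L-F678F1CE is assumed to be the quota code for VPCs per Region, so confirm it with list-service-quotas first.
# Count the VPCs in the Region where the cluster will be created.
aws ec2 describe-vpcs --region us-east-1 --query "length(Vpcs)"
# Look up the applied "VPCs per Region" quota (confirm the quota code with: aws service-quotas list-service-quotas --service-code vpc).
aws service-quotas get-service-quota --service-code vpc --quota-code L-F678F1CE --region us-east-1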
https://repost.aws/knowledge-center/eks-cluster-creation-errors
How do I create a Python 3 virtual environment with the Boto 3 library on Amazon Linux 2?
How do I create an isolated Python 3 virtual environment with the Boto 3 library on an Amazon Elastic Compute Cloud (Amazon EC2) instance or on-premise solution that's running Amazon Linux 2?
"How do I create an isolated Python 3 virtual environment with the Boto 3 library on an Amazon Elastic Compute Cloud (Amazon EC2) instance or on-premise solution that's running Amazon Linux 2?Short descriptionTo create an isolated Python environment for Amazon Linux 2, you must:1.    Install Python 3 for Amazon Linux 2.2.    Install a virtual environment under the ec2-user home directory.3.    Activate the environment, and then install Boto 3.ResolutionInstall Python 3 for Amazon Linux 21.    Connect to your EC2 Linux instance using SSH. For more information, see Connecting to your Linux instance using SSH.2.    Perform a yum check-update to refresh the package index. The check-update also looks for available updates. Updating other packages shouldn't be required to create the Python 3 environment.3.    Run list installed to determine if Python 3 is already installed on the host.[ec2-user ~]$ yum list installed | grep -i python3Python 3 not installed output example:[ec2-user ~]$ yum list installed | grep -i python3[ec2-user ~]$[ec2-user ~]$ python3-bash: python3: command not foundPython 3 already installed output example:[ec2-user ~]$ yum list installed | grep -i python3python3.x86_64 3.7.4-1.amzn2.0.4 @amzn2-corepython3-libs.x86_64 3.7.4-1.amzn2.0.4 @amzn2-corepython3-pip.noarch 9.0.3-1.amzn2.0.1 @amzn2-corepython3-setuptools.noarch 38.4.0-3.amzn2.0.6 @amzn2-core[ec2-user ~]$ whereis python3python3: //usr/bin/python3 /usr/bin/python3.7 /usr/bin/python3.7m /usr/lib/python3.7 /usr/lib64/python3.7 /usr/include/python3.7m /usr/share/man/man1/python3.1.gz4.    If Python 3 isn't already installed, then install the package using the yum package manager.[ec2-user ~]$ sudo yum install python3 -yCreate a virtual environment under the ec2-user home directoryThe following command creates the app directory with the virtual environment inside of it. You can change my_app to another name. If you change my_app, make sure that you reference the new name in the remaining resolution steps.[ec2-user ~]$ python3 -m venv my_app/envActivate the virtual environment and install Boto 31.    Attach an AWS Identity and Access Management (IAM) role to your EC2 instance with the proper permissions policies so that Boto 3 can interact with the AWS APIs. For other authentication methods, see the Boto 3 documentation.2.    Activate the environment by sourcing the activate file in the bin directory under your project directory.[ec2-user ~]$ source ~/my_app/env/bin/activate(env) [ec2-user ~]$3.    Make sure that you have the latest pip module installed within your environment.(env) [ec2-user ~]$ pip install pip --upgrade4.    Use the pip command to install the Boto 3 library within our virtual environment.(env) [ec2-user ~]$ pip install boto35.    Run Python using the python executable.(env) [ec2-user ~]$ pythonPython 3.7.4 (default, Dec 13 2019, 01:02:18)[GCC 7.3.1 20180712 (Red Hat 7.3.1-6)] on linuxType "help", "copyright", "credits" or "license" for more information.>>>6.    Import the Boto 3 library, and then validate that it works. This step requires that you have the permissions policies configured from step 1. The following example output lists all the Amazon Simple Storage Service (Amazon S3) buckets within the account.>>> import boto3 # no error>>> s3 = boto3.resource('s3')>>> for bucket in s3.buckets.all():print(bucket.name)>>> exit()7.    Use the deactivate command to exit the virtual environment.(env) [ec2-user ~]$ deactivate[ec2-user ~]$8.    
To activate the virtual environment automatically when you log in, add it to the ~/.bashrc file.[ec2-user ~]$ echo "source ${HOME}/my_app/env/bin/activate" >> ${HOME}/.bashrc9.    Source the ~/.bashrc file in your home directory to reload the bash environment. Reloading automatically activates your virtual environment. The prompt reflects the change (env). This change also applies to any future SSH sessions.[ec2-user ~]$ source ~/.bashrc(env) [ec2-user ~]$Related informationUpdating instance softwareLaunching an instance using the Launch Instance WizardVirtualenv introductionFollow"
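As an optional check that isn't in the original steps, you can confirm inside the activated environment that Boto 3 is picking up the instance profile credentials before you call other services. This sketch assumes the IAM role from step 1 is attached and only calls AWS STS:

import boto3

# Print the account and role ARN that Boto 3 resolves from the instance profile.
# If this call fails, revisit the IAM role attached to the EC2 instance.
sts = boto3.client("sts")
identity = sts.get_caller_identity()
print("Account:", identity["Account"])
print("Caller ARN:", identity["Arn"])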
https://repost.aws/knowledge-center/ec2-linux-python3-boto3
How can AWS WAF help prevent brute force login attacks?
How can I use AWS WAF to help prevent brute force attacks?
"How can I use AWS WAF to help prevent brute force attacks?Short descriptionA brute force attack is a tactic for gaining unauthorized access to accounts, systems, and networks using trial and error to guess login credentials and encryption keys. This attack is called brute force because a hacker uses excessive forceful attempts to gain access to your accounts.The following AWS WAF features help prevent brute force login attacks:Rate-based rule statementAWS WAF CAPTCHAATP managed rule groupAWS WAF Automation on AWSResolutionRate-based rulesA rate-based rule tracks requests based on the originating IP addresses. The rule invokes if the rate of request exceeds the defined threshold per five-minute interval.Create a rate-based rule to block requests if the rate of requests is greater than expected. To find the threshold for a rate-based rule, you must turn on AWS WAF logging and analyze the logs get the rate of requests. For information on how to create a rate-based rule, see Creating a rule and adding conditions.You can also create a rate-based rule specific to a URI path. Brute force attacks generally target the login pages to get access to account credentials. Different pages on a website might receive different rates of requests. For example, a home page might receive a higher rate of traffic compared to login pages.To create a rate-based rule specific to a login page, use the following rule configuration:For Inspect Request, choose URI path.For Match type, choose Starts with string.For String to match, choose /login.AWS WAF CAPTCHAAWS WAF CAPTCHA challenges verify if requests hitting your website are from a human or a bot. Using CAPTCHA helps prevent brute force attacks, credential stuffing, web scraping, and spam requests to servers.If webpages are designed to receive requests from humans but are susceptible to brute force attacks, then create a rule with a CAPTCHA action. CAPTCHA action requests allow access to a server when the CAPTCHA challenge is successfully completed.To set up a CAPTCHA action on your login page, use the following rule configuration:For Inspect choose URI path.For Match Type choose Starts with string.For String to match choose /login.For Action choose CAPTCHA.For Immunity time choose Time in seconds.If a CAPTCHA action is configured, users accessing your login page must complete the CAPTCHA before they can enter their login information. This protection helps prevent brute force attacks from bots.Note: To help prevent brute force attacks from a human, set a low immunity time. A low immunity time slows the attack as the attacker must complete the CAPTCHA for each request. For more information, see Configuring the CAPTCHA immunity time.For more information on AWS WAF CAPTCHA, see AWS WAF CAPTCHA.ATP Managed Rule GroupThe AWS WAF account takeover prevention (ATP) managed rule group inspects malicious requests that attempt to take over your account. 
For example, brute force login attacks that use trial and error to guess credentials and gain unauthorized access to your account.The ATP rule group is an AWS managed rule group that contains predefined rules that provide visibility and control over requests performing anomalous login attempts.Use the following subset of rules in the ATP rule group to help block brute force attacks:VolumetricIpHighInspects for high volumes of requests sent from individual IP addresses.AttributePasswordTraversalInspects for attempts that use password traversal.AttributeLongSessionInspects for attempts that use long lasting sessions.AttributeUsernameTraversalInspects for attempts that use username traversal.VolumetricSessionInspects for high volumes of requests sent from individual sessions.MissingCredentialInspects for missing credentials.For more information on how to set up an ATP rule-group, see AWS WAF Fraud Control account takeover prevention (ATP).AWS WAF Automation on AWSAWS WAF Security Automation is an AWS CloudFormation template used to deploy a web ACL with a set of rules. You can activate these rules based on your use case. When a hacker attempts to guess the correct credentials as part of brute force attack, they receive an error code for each incorrect login attempt. For example, an error code might be a 401 Unauthorized response.The Scanners and probes rule can block requests sourcing from an IP that is continuously receiving a specific response code. Activating this rule deploys an AWS Lambda function or an Amazon Athena query that automatically parses Amazon CloudFront, or Application Load Balancer (ALB), access logs to check the HTTP response code from your backend server. If the number of requests receiving the error code reaches a defined threshold, then the rule blocks these requests for a custom period of time that you can configure.For more information on this template and how to deploy it, see Automatically deploy a single web access control list that filters web-based attacks with AWS WAF Automation on AWS.Follow"
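To illustrate the rate-based rule described above in code, the following boto3 (wafv2) sketch builds a rule statement that blocks a source IP that sends more than 300 requests in five minutes to paths starting with /login. The rule name, limit, and priority are assumptions, not values from the article; add the dictionary to the Rules list of your web ACL (for example with UpdateWebACL) after adjusting it for your setup.

# Hedged sketch of a rate-based rule scoped down to the /login URI path.
login_rate_rule = {
    "Name": "login-rate-limit",     # assumption: any rule name you choose
    "Priority": 1,                  # assumption: depends on the other rules in your web ACL
    "Statement": {
        "RateBasedStatement": {
            "Limit": 300,           # requests per five-minute window, per source IP
            "AggregateKeyType": "IP",
            "ScopeDownStatement": {
                "ByteMatchStatement": {
                    "SearchString": b"/login",
                    "FieldToMatch": {"UriPath": {}},
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                    "PositionalConstraint": "STARTS_WITH",
                }
            },
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "login-rate-limit",
    },
}

If you want the CAPTCHA behavior from the article instead of an outright block, the same rule can use a Captcha action rather than Block.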
https://repost.aws/knowledge-center/waf-prevent-brute-force-attacks
"How can I monitor the account activity of specific IAM users, roles, and AWS access keys?"
I want to view and monitor the account activity of specific AWS Identity and Access Management (IAM) identities.
"I want to view and monitor the account activity of specific AWS Identity and Access Management (IAM) identities.Short descriptionTo view and monitor the account activity of specific IAM identities, you can use any of the following AWS services and features:AWS CloudTrail event historyAmazon CloudWatch Logs InsightsAmazon Athena queriesResolutionTo use CloudTrail event historyNote: You can use CloudTrail to search event history for the last 90 days.1.    Open the CloudTrail console.2.    Choose Event history.3.    In Filter, select the dropdown list. Then, choose User name.Note: You can also filter by AWS access key.4.    In the Enter user or role name text box, enter the IAM user's "friendly name" or the assumed role session name.Note: The role session name for a specific session is the value provided as a session name when the role is assumed. Value for "User name" field won't be the role name for calls made using the IAM role.5.    In Time range, enter the desired time range. Then, choose Apply.6.    In Event time, expand the event. Then, choose View event.The userIdentity element contains details about the type of IAM identity that made the request and the credentials provided.Example userIdentity element that includes IAM user credentials used to make an API callNote: Replace Alice with the username that you're searching for. Enter the IAM user's "friendly name" or the assumed role's "role session name." The role session name for a specific session is the value provided as a session name when the role is assumed. For calls made using the IAM role, the value for the userName field isn't the role name."userIdentity": { "type": "IAMUser", "principalId": "AIDAJ45Q7YFFAREXAMPLE", "arn": "arn:aws:iam::123456789012:user/Alice", "accountId": "123456789012", "accessKeyId": "AKIAIOSFODNN7EXAMPLE", "userName": "Alice"}Example userIdentity element that includes temporary security credentials"userIdentity": { "type": "AssumedRole", "principalId": "AROAIDPPEZS35WEXAMPLE:AssumedRoleSessionName", "arn": "arn:aws:sts::123456789012:assumed-role/RoleToBeAssumed/AssumedRoleSessionName", "accountId": "123456789012", "accessKeyId": "AKIAIOSFODNN7EXAMPLE", "sessionContext": { "attributes": { "mfaAuthenticated": "false", "creationDate": "20131102T010628Z" }, "sessionIssuer": { "type": "Role", "principalId": "AROAIDPPEZS35WEXAMPLE", "arn": "arn:aws:iam::123456789012:role/RoleToBeAssumed", "accountId": "123456789012", "userName": "RoleToBeAssumed" } }}Note: CloudTrail event history uses the assumed role session name as the username for filtering events.The API call uses temporary security credentials obtained by assuming an IAM role. The element contains additional details about the role assumed to get credentials.Note: If you don't see user activity, then verify that the AWS service is supported and the API event is recorded by CloudTrail. For more information, see AWS service topics for CloudTrail.To use CloudWatch Logs InsightsNote: You can use CloudWatch Logs Insights to search API history beyond the last 90 days. You must have a trail created and configured to log to Amazon CloudWatch Logs. For more information, see Creating a trail.1.    Open the CloudWatch console.2.    Choose Logs.3.    In Log Groups, choose your log group.4.    Choose Search Log Group.5.    In Filter events, enter a query to either search for a user's API calls, or specific API actions. Then, choose the refresh icon.Example query to search logs for a user's API callsNote: Replace Alice with the username that you're searching for. 
Enter the IAM user's "friendly name" or the assumed role's "role session name." The role session name for a specific session is the value provided as a session name when the role is assumed. For calls made using the IAM role, the value for the userName field isn't the role name.{ $.userIdentity.userName = "Alice" }Example query to search logs for specific API actionsNote: The following example query searches for the DescribeInstances API action.{ ($.eventName = "DescribeInstances") && ($.requestParameters.userName = "Alice" ) }For more information, see CloudWatch Logs Insights query syntax.To use Athena queriesNote: You can use Athena to query CloudTrail Logs over the last 90 days.1.    Open the Athena console.2.    Choose Query Editor.3.    Enter one of the following example queries based on your use case. Then, choose Run query:Example query to return all CloudTrail events performed by a specific IAM userImportant: Replace athena-table with your Athena table name. Replace Alice with the IAM user that you want to view account activity for.SELECT *FROM athena-tableWHERE useridentity.type = 'IAMUser'AND useridentity.username LIKE 'Alice';Example query to filter all the API activity performed by an IAM roleNote: Replace role-name with your IAM role name.SELECT *FROM athena-tableWHERE useridentity.sessionContext.sessionissuer.arn LIKE '%role-name%'AND useridentity.sessionContext.sessionissuer.type = 'Role';Example query to match the role ARNSELECT *FROM athena-tableWHERE useridentity.sessionContext.sessionissuer.arn = 'arn:aws:iam::account-id123456789:role/role-name'AND useridentity.sessionContext.sessionissuer.type = 'Role';Example query to filter for all activity using the IAM access key IDSELECT eventTime, eventName, userIdentity.principalId,eventSourceFROM athena-tableWHERE useridentity.accesskeyid like 'AKIAIOSFODNN7EXAMPLE'Related informationHow do I use AWS CloudTrail to track API calls to my Amazon EC2 instances?How do I use CloudTrail to see if a security group or resource was changed in my AWS account?Follow"
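If you'd rather script the 90-day event history search than use the console, the following boto3 sketch applies the same Username filter; the username and the seven-day window are placeholder assumptions:

import boto3
from datetime import datetime, timedelta

# Search recent CloudTrail event history for events made by a specific IAM user or
# assumed role session name (the same filter as the console's "User name" filter).
cloudtrail = boto3.client("cloudtrail")
end = datetime.utcnow()
start = end - timedelta(days=7)

paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "Alice"}],
    StartTime=start,
    EndTime=end,
):
    for event in page["Events"]:
        print(event["EventTime"], event["EventName"], event.get("Username"))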
https://repost.aws/knowledge-center/view-iam-history
How do I resolve connection timeouts when I connect to my Service that's hosted in Amazon EKS?
I get connection timeouts when I connect to my Service that's hosted in my Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
"I get connection timeouts when I connect to my Service that's hosted in my Amazon Elastic Kubernetes Service (Amazon EKS) cluster.Short descriptionTwo of the most common reasons you can't connect to your Service in your Amazon EKS cluster are:The security group or network access control list (network ACL) restrictions are preventing traffic from reaching the pod endpoints.The Service doesn't select the pod endpoints because the labels don't match.To resolve these issues, check the security groups and network ACLs that are associated with your worker node instances and load balancer. Also, verify that your Service has the correct labels selected for your pods.Note: Troubleshooting varies for different service types. The following resolutions are applicable for when you're troubleshooting inaccessible services. To learn more about Kubernetes Service types, see How do I expose the Kubernetes services running on my Amazon EKS cluster?ResolutionCheck your security group and network ACLsCluster IPThe cluster IP service type is used for communication between microservices that run in the same Amazon EKS cluster. Make sure that the security group that's attached to the instance where the destination pod is located has an inbound rule to allow communication from the client pod instance.In most cases, there's a self rule that allows all communication over all ports in the worker node security groups. If you use multiple node groups, each with its own security group, make sure that you allow all communication between the security groups. This lets the microservices that run across the multiple nodes to communicate easily.To learn more, see Amazon EKS security group considerations.Node portThe worker node security group should allow incoming traffic on the portthat was specified in the NodePort Service definition. If it's not specified in the Service definition, then the value of the port parameter is the same as the targetPort parameter. The port is exposed on all nodes in the Amazon EKS cluster.Check the network ACLS that are linked to the worker node subnets. Make sure that your client IP address is on the allow list over the port that the Service uses.If you're accessing the Kubernetes Service over the internet, make sure that your nodes have a Public IP address. To access the Service, you must use the node's Public IP address and port combination.Load balancerMake sure that the load balancer security group allows the listener ports. Also, make sure that the worker node security group allows incoming traffic from the load balancer security group over the port where the application container is running.If the port that's specified in the Service definition is different from the targetPort, then you must allow the incoming traffic over the port in the worker node security group for the load balancer security group. The port and targetPort are usually the same in the Service definition.The network ACLs must allow your client IP address to reach the load balancer at the listener port. If you're accessing the load balancer over the internet, then make sure that you created a public load balancer.Check if your Service selected the pod endpoints correctlyIf your pods aren't registered as backends for the Service, then you can receive a timeout error. 
This can happen when you access the Service from a browser or when you run the curl podIP:podPort command.Check the labels for the pods and verify that the Service has the appropriate label selectors (from the Kubernetes website).Run the following commands to verify if your Kubernetes Service correctly selected and registered your pods.Command:kubectl get pods -o wideExample output:NAME                    READY   STATUS    RESTARTS   AGE       IP          NODE                         NOMINATED NODE   READINESS GATESnginx-6799fc88d8-2rtn8   1/1   Running     0       3h4m   172.31.33.214 ip-172-31-33-109.us-west-2.compute.internal none noneCommand:kubectl describe svc your_service_name -n your_namespaceNote: Replace your_service_name with your service name and your_namespace with your namespace.Example output:Events:            noneSession Affinity:  noneEndpoints:         172.31.33.214:80....In the preceding example output, 172.31.33.214 is the pod IP address that was obtained from running the kubectl get pods -o wide command. The 172.31.33.214 IP address also serves as the backend to a service that's running in an Amazon EKS cluster.Follow"
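To review the worker node security group rules without opening the console, a boto3 sketch like the following prints every inbound rule so you can confirm that the NodePort or targetPort is allowed; the security group ID is a placeholder assumption:

import boto3

# List the inbound rules of a worker node security group so you can confirm that the
# NodePort (or the targetPort used by the load balancer) is allowed.
ec2 = boto3.client("ec2")
sg_id = "sg-0123456789abcdef0"  # assumption: replace with your worker node security group ID

sg = ec2.describe_security_groups(GroupIds=[sg_id])["SecurityGroups"][0]
for rule in sg["IpPermissions"]:
    print(
        rule.get("IpProtocol"),
        rule.get("FromPort"),
        rule.get("ToPort"),
        [r["CidrIp"] for r in rule.get("IpRanges", [])],
        [g["GroupId"] for g in rule.get("UserIdGroupPairs", [])],
    )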
https://repost.aws/knowledge-center/eks-resolve-connection-timeouts
Why did GuardDuty send me alert findings for a trusted IP list address?
I followed the instructions to set up a trusted IP address list for Amazon GuardDuty. Why is GuardDuty sending me alert findings for my trusted IP address?
"I followed the instructions to set up a trusted IP address list for Amazon GuardDuty. Why is GuardDuty sending me alert findings for my trusted IP address?ResolutionUse the following best practices to verify the trusted IP list settings:Be sure that the trusted IP lists uploaded in the same AWS Region as your GuardDuty findings.Verify that the trusted IP lists are activated. For instructions, see To activate or deactivate trusted IP lists and threat lists.If you changed the trusted IP list, you must reactivate it in GuardDuty. For instructions, see To update trusted IP lists and threat lists.Be sure that IP addresses added in the trusted IP list are publicly routable IPv4 addresses. Support for IPv6 addresses isn't available.Adding a domain name, private IP address, or IPv6 address in a trusted IP list doesn't prevent GuardDuty from generating findings.In member accounts, GuardDuty generates findings for malicious IP addresses from the threat lists uploaded in the GuardDuty administrator account, not the trusted IP lists. For more information, see Managing GuardDuty accounts with AWS Organizations.Related informationWorking with trusted IP lists and threat listsHow to use Amazon GuardDuty and AWS Web Application Firewall to automatically block suspicious hostsFollow"
https://repost.aws/knowledge-center/guardduty-trusted-ip-list-alert
How do I modify the values of an Amazon RDS DB parameter group?
How do I modify the values of an Amazon Relational Database Service (Amazon RDS) DB parameter group? How can I resolve an issue that I experienced when trying to change my Amazon RDS DB instance configuration?
"How do I modify the values of an Amazon Relational Database Service (Amazon RDS) DB parameter group? How can I resolve an issue that I experienced when trying to change my Amazon RDS DB instance configuration?Short descriptionYou can modify parameter values in a custom DB parameter group. However, you can't change the parameter values in a default DB parameter group. If you're experiencing an issue while modifying the value of a DB parameter group, review the following common issues:If you use commands such as SET, you might receive an error, because these commands can't be used to update RDS DB instance configurations.If you can't update the DB instance configuration, it might be because you can't change the values of a default RDS DB parameter group.If you changed the parameter values but the changes aren't in effect, it might be because not all modifications are applied immediately.If you can't modify DB parameters under any circumstances, it might be because the parameter's property value for Is Modifiable is false.For more information, see Working with DB parameter groups.ResolutionTo change an RDS DB instance configuration, you must change the parameter values of the DB parameter group for your RDS DB instance. To modify an RDS DB instance configuration, follow these steps:Create a DB parameter group.View the parameter values for a DB parameter group to confirm that the Is Modifiable property is true.Modify the parameters in a DB parameter group.After the custom DB parameter group is applied (by using Apply immediately or by using Apply during the maintenance window), the DB parameter group status for that instance changes to pending-reboot in Amazon RDS console. This means that the parameter group is applied, but the parameter changes aren't applied yet. After a manual reboot of the RDS DB instance, the parameter changes are applied and the DB parameter group status for the instance changes from pending-reboot to in-sync.DB instances require a manual reboot in the following circumstances:If you replace the current parameter group with a different parameter group.If you modify and save a static parameter in a custom parameter group.Static parameter change takes effect after you manually reboot the RDS DB instance. For more information, see Modifying parameters in a DB parameter group.A reboot doesn't occur in the following circumstances:If you modify a dynamic parameter in a custom parameter group.For more information, see Amazon RDS DB parameter changes not taking effect.Related informationModifying an Amazon RDS DB instanceManaging an Amazon Aurora DB clusterFollow"
https://repost.aws/knowledge-center/rds-modify-parameter-group-values
How do I register for AWS Educate?
I’m interested in registering for AWS Educate. Am I eligible?
"I’m interested in registering for AWS Educate. Am I eligible?ResolutionAWS Educate is open to any individual, regardless of where they are in their education, technical experience, or career journey.AWS Educate provides access to online training resources and labs to learn, practice, and evaluate cloud skills without having to create an Amazon or AWS account.Individual learners can register for AWS Educate for free. To get started, see AWS Educate registration.Related informationAWS Educate frequently asked questionsHow do I get support from the AWS Educate program?Follow"
https://repost.aws/knowledge-center/aws-educate-signup
How do I resolve the error that I receive in CloudFormation when I try to publish slow logs to CloudWatch Logs?
I want to resolve the error that I receive in AWS CloudFormation when I try to publish slow logs to Amazon CloudWatch Logs. The error is: "The Resource Access Policy specified for the CloudWatch Logs log group /aws/aes/domains/search/search-logs does not grant sufficient permissions for Amazon Elasticsearch Service to create a log stream."
"I want to resolve the error that I receive in AWS CloudFormation when I try to publish slow logs to Amazon CloudWatch Logs. The error is: "The Resource Access Policy specified for the CloudWatch Logs log group /aws/aes/domains/search/search-logs does not grant sufficient permissions for Amazon Elasticsearch Service to create a log stream."Short descriptionTo resolve this error, use a separate policy at the log group level to allow Amazon Elasticsearch Service (Amazon ES) to push logs to CloudWatch Logs. Then, use AccessPolicies in the AWS::Elasticsearch::Domain resource to set permissions for Amazon ES domains.The following steps show you how to publish slow logs to CloudWatch with CloudFormation by using an AWS Lambda-backed custom resource created in Python 3.6. The custom resource triggers a Lambda function, which triggers the PutResourcePolicy API to publish slow logs.Note: When CloudFormation enables log publishing to CloudWatch, the AWS::Logs::LogGroup resource doesn't have a property to assign a resource access policy. The resource access policy specified for the CloudWatch Logs log group should grant sufficient permissions for Amazon ES to publish the log stream. You can't create an access policy permission directly using a CloudFormation resource. This is because the PutResourcePolicy API call for the AWS::Logs::LogGroup resource isn't supported by CloudFormation.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.The following CloudFormation template uses a custom resource to get the name of the log group. Then, the template applies the policy that allows the Amazon ES service to make API calls on the log group.1.    Create a CloudFormation template called ESlogsPermission.yaml:Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.Permission is hereby granted, free of charge, to any person obtaining a copy of thissoftware and associated documentation files (the "Software"), to deal in the Softwarewithout restriction, including without limitation the rights to use, copy, modify,merge, publish, distribute, sublicense, and/or sell copies of the Software, and topermit persons to whom the Software is furnished to do so.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR APARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHTHOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTIONOF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THESOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.AWSTemplateFormatVersion: 2010-09-09Description: AWS cloudFormation template to publish slow logs to Amazon CloudWatch Logs.Parameters: LogGroupName: Type: String Description: Please don't change the log group name while updating ESDomainName: Description: A name for the Amazon Elastic Search domain Type: String LambdaFunctionName: Description: Lambda Function Name Type: StringResources: AwsLogGroup: Type: 'AWS::Logs::LogGroup' Properties: LogGroupName: !Ref LogGroupName LambdaLogGroup: Type: 'AWS::Logs::LogGroup' Properties: LogGroupName: !Sub '/aws/lambda/${LambdaFunctionName}' LambdaExecutionRole: Type: 'AWS::IAM::Role' Properties: AssumeRolePolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: Service: - lambda.amazonaws.com Action: - 'sts:AssumeRole' Path: / Policies: - PolicyName: root1 PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Action: - 'logs:CreateLogStream' - 'logs:PutLogEvents' Resource: !Sub >- arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/${LambdaFunctionName}:log-stream:* - PolicyName: root2 PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Action: - 'logs:CreateLogGroup' Resource: - !Sub >- arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/${LambdaFunctionName} - !Sub >- arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/${LogGroupName} - Effect: Allow Action: - 'logs:PutResourcePolicy' - 'logs:DeleteResourcePolicy' Resource: !Sub 'arn:aws:logs:${AWS::Region}:${AWS::AccountId}:*' logGroupPolicyFunction: DependsOn: LambdaLogGroup Type: 'AWS::Lambda::Function' Properties: FunctionName: !Ref LambdaFunctionName Code: ZipFile: > import urllib3 import json import boto3 http = urllib3.PoolManager() SUCCESS = "SUCCESS" FAILED = "FAILED" def send(event, context, responseStatus, responseData, physicalResourceId=None, noEcho=False): responseUrl = event['ResponseURL'] print(responseUrl) responseBody = {} responseBody['Status'] = responseStatus responseBody['Reason'] = 'See the details in CloudWatch Log Stream: ' + context.log_stream_name responseBody['PhysicalResourceId'] = physicalResourceId or context.log_stream_name responseBody['StackId'] = event['StackId'] responseBody['RequestId'] = event['RequestId'] responseBody['LogicalResourceId'] = event['LogicalResourceId'] responseBody['NoEcho'] = noEcho responseBody['Data'] = responseData json_responseBody = json.dumps(responseBody) print("Response body:\n" + json_responseBody) headers = { 'content-type' : '', 'content-length' : str(len(json_responseBody)) } try: response = http.request('PUT',responseUrl,body=json_responseBody.encode('utf-8'),headers=headers) print("Status code: " + response.reason) except Exception as e: print("send(..) 
failed executing requests.put(..): " + str(e)) def handler(event, context): logsgroup_policy_name=event['ResourceProperties']['CWLOGS_NAME'] cw_log_group_arn=event['ResourceProperties']['CWLOG_ARN'] cwlogs = boto3.client('logs') loggroup_policy={ "Version": "2012-10-17", "Statement": [{ "Sid": "", "Effect": "Allow", "Principal": { "Service": "es.amazonaws.com"}, "Action":[ "logs:PutLogEvents", " logs:PutLogEventsBatch", "logs:CreateLogStream" ], 'Resource': f'{cw_log_group_arn}' }] } loggroup_policy = json.dumps(loggroup_policy) if(event['RequestType'] == 'Delete'): print("Request Type:",event['RequestType']) cwlogs.delete_resource_policy( policyName=logsgroup_policy_name ) responseData={} send(event, context, SUCCESS, responseData) elif(event['RequestType'] == 'Create'): try: cwlogs.put_resource_policy( policyName = logsgroup_policy_name, policyDocument = loggroup_policy ) responseData={} print("Sending response to custom resource") send(event, context, SUCCESS, responseData) except Exception as e: print('Failed to process:', e) send(event, context, FAILED, responseData) elif(event['RequestType'] == 'Update'): try: responseData={} print("Update is not supported on this resource") send(event, context, SUCCESS, responseData) except Exception as e: print('Failed to process:', e) send(event, context, FAILED, responseData) Handler: index.handler Role: !GetAtt - LambdaExecutionRole - Arn Runtime: python3.6 logGroupPolicycustomresource: Type: 'Custom::LogGroupPolicy' Properties: ServiceToken: !GetAtt - logGroupPolicyFunction - Arn CWLOGS_NAME: !Ref LogGroupName CWLOG_ARN: !GetAtt - AwsLogGroup - Arn ElasticsearchDomain: Type: 'AWS::Elasticsearch::Domain' DependsOn: logGroupPolicycustomresource Properties: DomainName: !Ref ESDomainName ElasticsearchVersion: '6.2' EBSOptions: EBSEnabled: true VolumeSize: 10 VolumeType: gp2 LogPublishingOptions: SEARCH_SLOW_LOGS: CloudWatchLogsLogGroupArn: !GetAtt - AwsLogGroup - Arn Enabled: true2.    To launch a CloudFormation stack with the ESlogsPermission.yaml file, use the CloudFormation console or the following AWS CLI command:aws cloudformation create-stack --stack-name yourStackName --template-body file://yourTemplateName --parameters ParameterKey=LogGroupName,ParameterValue=Your-LogGroup-Name, ParameterKey=ESDomainName,ParameterValue=Your-ES-Name --capabilities CAPABILITY_NAMED_IAM --region yourRegionNote: Replace yourStackName, yourTemplateName, Your-LogGroup-Name, Your-ES-Name, and yourRegion with your values.The CloudFormation template does the following for you:1.    Creates a log group.2.    Creates a Lambda function. The Lambda function gets the log group name from the Parameters section of the CloudFormation template by using a custom resource. The Lambda function calls the PutResourcePolicy API for the log group name. The log group must have a policy to allow the Amazon ES domain to put the logs.3.    Creates a Lambda-backed custom resource to invoke the Lambda function created in step 2. The custom resource helps apply PutResourcePolicy on the log group Amazon Resource Name (ARN) so that Amazon ES can stream logs. In the template, CloudFormation uses a custom resource to create an Amazon ES domain with the LogPublishingOption.Follow"
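Outside of CloudFormation, you can apply the same resource policy directly with the CloudWatch Logs API, which is what the Lambda-backed custom resource does internally. The following standalone boto3 sketch uses placeholder values for the policy name, account ID, Region, and log group ARN:

import json
import boto3

logs = boto3.client("logs")

# assumption: replace with the ARN of your slow log group
log_group_arn = "arn:aws:logs:us-east-1:123456789012:log-group:/aws/aes/domains/search/search-logs:*"

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "es.amazonaws.com"},
            "Action": ["logs:PutLogEvents", "logs:CreateLogStream"],
            "Resource": log_group_arn,
        }
    ],
}

# Grant the Amazon ES service permission to create log streams and put log events
# into the slow log group.
logs.put_resource_policy(
    policyName="AES-slow-logs-policy",        # assumption: any policy name
    policyDocument=json.dumps(policy_document),
)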
https://repost.aws/knowledge-center/cloudformation-slow-log-error
How is an Amazon EBS io2 Block Express volume different from io1 and io2 volumes?
I don't know how Amazon Elastic Block Store (Amazon EBS) io2 Block Express volumes differ from io1 and io2 volumes.
"I don't know how Amazon Elastic Block Store (Amazon EBS) io2 Block Express volumes differ from io1 and io2 volumes.Resolutionio1, io2, and io2 Block Express are all Provisioned IOPS SSD volumes. However, io2 Block Express volumes can deliver higher throughput and IOPS and support larger storage capacity. For more information on each volume type, see the chart under Provisioned IOPS SSD (io1 and io2) volumes.The following Amazon Elastic Compute Cloud (Amazon EC2) instance types support io2 Block Express volumes:c7gc6inm6inm6idnm7gr5br6inr6idnr7gtrn1x2idnx2iednWhen you attach an io2 volume to these Amazon EC2 instance types, the volume automatically becomes an io2 Block Express volume.The price for Amazon EBS volume types depends on storage and IOPS. The price for storage between the volume types is the same. However, the price for IOPS varies depending on the number of IOPS that you provision. For more information, see Amazon EBS pricing, and choose the AWS Region that you're in from the Region dropdown list.Follow"
https://repost.aws/knowledge-center/ebs-io1-io2-block-express-differences
Why is an Amazon EMR step still running even though my YARN application completed?
An Amazon EMR step is still in the RUNNING state even though the respective Apache Spark or YARN application completed.
"An Amazon EMR step is still in the RUNNING state even though the respective Apache Spark or YARN application completed.ResolutionUse one of the following methods to resolve the issue:Validate the status of respective YARN application and then end the step.Cancel the Step ID manually using the AWS Command Line Interface (AWS CLI).Validate the status of the YARN application and then end the step1.    Identify the YARN applicationId from the step logs stderr file. For more information, see How do I troubleshoot a failed step in Amazon EMR?2.    Connect to the primary node using SSH.3.    Use the following YARN command to find the state of the YARN application. In the following example command, replace application_id with your application ID. An example application ID is application_1234567891011_001.yarn application -status application_idOr, use the following YARN command to list all applications:yarn application -list -appStates ALL4.    Check the output of the preceding command for the state of the application.Application-States: [NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED]The following is example output of an application that completed successfully:Progress : 100%State : FINISHEDFinal-State : SUCCEEDEDIf the output of the preceding command has the states FINISHED, FAILED, or KILLED, then the YARN application is completed.If the application status is NEW, NEW_SAVING, SUBMITTED, ACCEPTED, or RUNNING then the YARN application is still running. Wait for the application to complete or end the application to cancel the step.7.    Run the following command to end the application. In the following example command, replace application_id with your application ID. An example application ID is application_1234567891011_001.yarn application -kill application_id8.    Check the status of the Amazon EMR step again after ending the application.Cancel the Amazon EMR step manually using the AWS CLINote: If you receive errors when running AWS CLI commands make sure that you’re using the most recent version of the AWS CLI.1.    Use the describe-step command to view the step's status. In the following command, replace cluster-id and step-id with the correct values for your use case.aws emr describe-step --cluster-id j-xxxxxxxxxxxxx --step-ids s-xxxxxxxx2.    Use the cancel-steps command to cancel the step. In the following command, replace cluster-id and step-id with the correct values for your use case.aws emr cancel-steps --cluster-id j-xxxxxxxxxxxxx \--step-ids s-3M8DXXXXXXXXX \--step-cancellation-option SEND_INTERRUPTFor more information, see Canceling steps.Follow"
https://repost.aws/knowledge-center/emr-troubleshoot-running-steps
How do I resolve "403 ERROR - The request could not be satisfied. Bad Request" in CloudFront?
Amazon CloudFront is returning the error message "403 ERROR - The request could not be satisfied. Bad Request."
"Amazon CloudFront is returning the error message "403 ERROR - The request could not be satisfied. Bad Request."Short descriptionThe error message "403 ERROR - The request could not be satisfied. Bad Request." is from the client. This error can occur due to one of the following reasons:The request is initiated over HTTP, but the CloudFront distribution is configured to allow only HTTPS requests. To resolve this issue, follow the steps in the Allow HTTP requests Resolution section.The requested alternate domain name (CNAME) isn't associated with the CloudFront distribution. To resolve this issue, follow the steps in the Associate a CNAME with a distribution Resolution section.Note: This resolution is for troubleshooting the error when you own the application or website that uses CloudFront to serve content to end users. If you receive this error when trying to view an application or access a website, then contact the provider or website owner for assistance.For information on troubleshooting other types of 403 errors, see How do I troubleshoot 403 errors from CloudFront?ResolutionAllow HTTP requestsFollow these steps:Open the Amazon CloudFront console.Choose the distribution that's returning the Bad Request error.Choose the Behaviors tab.Choose the behavior that matches the request. Then, choose Edit.For Viewer Protocol Policy, choose either HTTP and HTTPS or Redirect HTTP to HTTPS.Note: HTTP and HTTPS allows connections on both HTTP and HTTPS. Redirect HTTP to HTTPS automatically redirects HTTP requests to HTTPS.Choose Save Changes.Associate a CNAME with a distributionFollow these steps:Open the Amazon CloudFront console.Choose the distribution that's returning the Bad Request error.Choose the General tab.Under Settings, choose Edit.For Alternate Domain Names (CNAMEs), select Add Item.Enter the CNAME that you want to associate with the CloudFront distribution.Under Custom SSL certificate, choose the certificate that covers the domain. For more information, see How do I configure my CloudFront distribution to use an SSL/TLS certificate?Note: An SSL certificate is required to associate a CNAME with a distribution. For more information see, Requirements for using alternate domain names.Choose Save changes.Related informationHow CloudFront processes HTTP and HTTPS requestsHow do I resolve "403 Error - The request could not be satisfied. Request Blocked" in CloudFront?502 and 494 error: The request could not be satisfied by CloudFrontFollow"
https://repost.aws/knowledge-center/resolve-cloudfront-bad-request-error
How can I specify multiple Amazon S3 object prefixes for my Snowball export job?
"I want to specify multiple Amazon Simple Storage Service (Amazon S3) object key name prefixes for my AWS Snowball export job. However, when I create a Snowball export job with multiple prefixes, I see that only some files in the prefix range are copied."
"I want to specify multiple Amazon Simple Storage Service (Amazon S3) object key name prefixes for my AWS Snowball export job. However, when I create a Snowball export job with multiple prefixes, I see that only some files in the prefix range are copied.ResolutionSnowball export jobs support only one prefix range per S3 bucket. To copy all objects across multiple prefixes, you must specify the range to cover all the object key name prefixes that you want to copy. If you're using the AWS Command Line Interface (AWS CLI) to create an export job, then you must confirm that the BeginMarker and EndMarker that you specify within the KeyRange include all the prefixes that you want to copy.Note: The AWS CLI doesn't return an error when you specify multiple values for BeginMarker or EndMarker. However, doing so results in an incomplete copy operation.If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.If the prefix range (key range) that you specify results in an incomplete copy operation, you can do either of the following:Copy the remaining objects to another S3 bucket. You can choose to use the whole bucket to export the remaining data. Or, you can choose to export the remaining objects to a prefix range using a single value for BeginMarker or EndMarker. You're charged for the PUT, COPY, and LIST requests.If you can't copy the remaining objects to another S3 bucket, then you must cancel the incomplete export job. Then, create another export job to copy the remaining objects or prefixes. For example, create two export jobs (or more as needed) to include the desired objects or prefixes.Warning: This option costs more than copying the remaining objects to another bucket.Related informationUsing export rangesAmazon S3 pricingAWS Snowball pricingFollow"
https://repost.aws/knowledge-center/snowball-export-multiple-s3-prefixes
How do I deploy code from a CodeCommit repository to an Elastic Beanstalk environment?
I want to use AWS CodeCommit to deploy incremental code updates to an AWS Elastic Beanstalk environment without re-uploading my entire project.
"I want to use AWS CodeCommit to deploy incremental code updates to an AWS Elastic Beanstalk environment without re-uploading my entire project.ResolutionYou can use the Elastic Beanstalk Command Line Interface (EB CLI) to deploy your application directly from a CodeCommit repository.Install the EB CLI.Initialize a local Git repository.Create a CodeCommit repository.Note: You can also use the EB CLI to configure additional branches. Then, you can use an existing repository to deploy your code to your Elastic Beanstalk environment.Deploy your code from the CodeCommit repository.Related informationManaging environmentsFollow"
https://repost.aws/knowledge-center/deploy-codecommit-elastic-beanstalk
How can I identify what is blocking a query on a DB instance that is running Amazon RDS PostgreSQL or Aurora PostgreSQL?
"I tried to run a query on a DB instance that is running Amazon Relational Database Service (Amazon RDS) PostgreSQL or Amazon Aurora PostgreSQL. But the query was blocked, even though no other queries were executing at the same time. Why was the query blocked, and how do I resolve this issue?"
"I tried to run a query on a DB instance that is running Amazon Relational Database Service (Amazon RDS) PostgreSQL or Amazon Aurora PostgreSQL. But the query was blocked, even though no other queries were executing at the same time. Why was the query blocked, and how do I resolve this issue?ResolutionMost often, blocked queries are caused by uncommitted transactions. Uncommitted transactions can cause new queries to be blocked, to sleep, and to eventually fail when they exceed the lock wait timeout or the statement timeout. To resolve this issue, first identify the blocking transaction, and then stop the blocking transaction.1.    Identify the current state of the blocked transaction by running the following query against the pg_stat_activity table:SELECT * FROM pg_stat_activity WHERE query iLIKE '%TABLE NAME%' ORDER BY state;Note: Replace TABLE NAME with your own table name or condition.If the value of the wait_event_type column is Lock, the query is blocked by other transactions or queries. If the wait_event_type column has any other value, there is a performance bottleneck with resources such as CPU, storage, or network capacity. To resolve performance bottlenecks, tune the performance of your database, for example, by adding indexes, rewriting queries, or executing vacuum and analyze. For more information, see Best Practices for Working with PostgreSQL.If you enabled Performance Insights on your DB instance, you can also identify blocked transactions by viewing the DB load that is grouped by wait event, hosts, SQL queries, or users. For more information, see Using Amazon RDS Performance Insights.2.    If the value of the wait_event_type column is Lock, then you can identify the cause of the blocked transaction by running the following:SELECT blocked_locks.pid AS blocked_pid, blocked_activity.usename AS blocked_user, blocked_activity.client_addr as blocked_client_addr, blocked_activity.client_hostname as blocked_client_hostname, blocked_activity.client_port as blocked_client_port, blocked_activity.application_name as blocked_application_name, blocked_activity.wait_event_type as blocked_wait_event_type, blocked_activity.wait_event as blocked_wait_event, blocked_activity.query AS blocked_statement, blocking_locks.pid AS blocking_pid, blocking_activity.usename AS blocking_user, blocking_activity.client_addr as blocking_user_addr, blocking_activity.client_hostname as blocking_client_hostname, blocking_activity.client_port as blocking_client_port, blocking_activity.application_name as blocking_application_name, blocking_activity.wait_event_type as blocking_wait_event_type, blocking_activity.wait_event as blocking_wait_event, blocking_activity.query AS current_statement_in_blocking_process FROM pg_catalog.pg_locks blocked_locks JOIN pg_catalog.pg_stat_activity blocked_activity ON blocked_activity.pid = blocked_locks.pid JOIN pg_catalog.pg_locks blocking_locks ON blocking_locks.locktype = blocked_locks.locktype AND blocking_locks.DATABASE IS NOT DISTINCT FROM blocked_locks.DATABASE AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid AND 
blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid AND blocking_locks.pid != blocked_locks.pid JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid WHERE NOT blocked_locks.granted ORDER BY blocked_activity.pid;3.    Review the columns that have a blocking prefix. In the following example table that was generated by this query, you can see that the blocked transaction is running on the 27.0.3.146 host and using psql:blocked_pid | 9069blocked_user | masterblocked_client_addr | 27.0.3.146blocked_client_hostname |blocked_client_port | 50035blocked_application_name | psqlblocked_wait_event_type | Lockblocked_wait_event | transactionidblocked_statement | UPDATE test_tbl SET name = 'Jane Doe' WHERE id = 1;blocking_pid | 8740blocking_user | masterblocking_user_addr | 27.0.3.146blocking_client_hostname |blocking_client_port | 26259blocking_application_name | psqlblocking_wait_event_type | Clientblocking_wait_event | ClientReadcurrent_statement_in_blocking_process | UPDATE tset_tbl SET name = 'John Doe' WHERE id = 1;Tip: Use blocking_user, blocking_user_addr, and blocking_client_port to help identify which sessions are blocking transactions.Important: Before terminating transactions, evaluate the potential impact that each transaction has on the state of your database and your application.4.    After reviewing the potential impact of each transaction, stop the transactions by running the following query:SELECT pg_terminate_backend(PID);Note: Replace PID with blocking_pid of the process that you identified in step 3.Related informationPostgreSQL documentation for Viewing locksPostgreSQL documentation for Server signaling functionsPostgreSQL documentation for wait_event descriptionPostgreSQL Wiki for Lock monitoringAmazon Aurora PostgreSQL eventsFollow"
https://repost.aws/knowledge-center/rds-aurora-postgresql-query-blocked
How do I avoid asymmetry in route-based VPN with static routing?
I want to avoid asymmetric routing in route-based VPN that's configured for static routing.
"I want to avoid asymmetric routing in route-based VPN that's configured for static routing.Short descriptionAWS Site-to-Site VPN provides two endpoints per VPN connection to reach the same destination network. AWS uses either one of the active tunnels to route traffic to the same destination.VPN tunnels are usually hosted on "stateful" firewalls. The firewall devices expect a packet to use the same tunnel interface to send and receive traffic. Asymmetric routing occurs when the packet enters Amazon Virtual Private Cloud (Amazon VPC) through one tunnel and exits through the other tunnel on the same Site-to-Site VPN. When the packet returns through another tunnel interface, it doesn't match the "stateful" session and therefore gets dropped.ResolutionYou don’t need to create a new VPN with dynamic routing to address the problem of asymmetric routing. Instead, continue to use static routing after you’ve made changes to reflect the dynamic routing logic, as shown below.PrerequisiteConfirm that you have asymmetric routing by checking the Amazon CloudWatch metrics:View metrics for each tunnelIf you have only one VPN connection with Active/Active configuration:Open the CloudWatch console.In the navigation pane, choose Metrics.Under All metrics, choose the VPN metric namespace.Select VPN Tunnel Metrics.Select the CloudWatch metrics TunnelDataIn and TunnelDataOut. If there's asymmetric routing, one tunnel has data points for the metric TunnelDataIn. The second tunnel has data points for the metric TunnelDataOut.View metrics for the whole VPN connection (aggregate metrics)If you have multiple VPN connections:Open the CloudWatch console.In the navigation pane, choose Metrics.Under All metrics, choose the VPN metric namespace.Select VPN Connection Metrics.Select the CloudWatch metrics TunnelDataIn and TunnelDataOut. If there's asymmetric routing, one connection has data points for the metric TunnelDataIn. The other connection has data points for the metric TunnelDataOut.For more information on tunnel metrics, see Monitoring VPN tunnels using CloudWatch.Asymmetric routing scenariosReview the following options to avoid asymmetric routing in these scenarios:A single VPN connection configured as Active/ActiveTo avoid asymmetric routing:Use the IPsec aggregate feature if the customer gateway supports it. For more information, see IPsec aggregate for redundancy and tunnel load-balancing on the Fortinet website.If the customer gateway supports asymmetric routing, then make sure that asymmetric routing is turned on, on the virtual tunnel interfaces.If customer gateway doesn't support asymmetric routing, then make sure that the VPN setting is Active/Passive. This configuration identifies one tunnel as UP and the second tunnel as DOWN. In this setting, traffic from AWS to the on-premises network traverses only through the tunnel in the UP state. For more information, see How do I configure my Site-to-Site VPN to prefer tunnel A over tunnel B?Two VPN connections (VPN-Pry and VPN-Sec) connect to the same VPCIn this scenario, VPN connections connect to the same Amazon VPC, using the same virtual private gateway.Note: This scenario applies only to VPN connections with the virtual private gateway.Both connections:Use static routingAdvertise the same on-premises prefixes. 
For example, 10.170.0.0/20 and 10.167.0.0/20Connect to the same VPC through the virtual private gatewayHave different customer gateway public IPsImplement the following to avoid asymmetric routing:Static routes for VPN-Pry (primary connection):10.170.0.0/2110.170.8.0/2110.167.0.0/2110.167.8.0/21Static routes for VPN-Sec (secondary connection):10.170.0.0/2010.167.0.0/20In these settings, AWS chooses VPN-Pry as the preferred connection over VPN-Sec. AWS uses the longest prefix match in your route table that matches the traffic to determine how to route the traffic.Note: If your customer gateway doesn’t have asymmetric routing in this scenario, then configure each VPN setting as Active/Passive. Doing so identifies one tunnel as active per VPN connection. Traffic fails over to the active tunnel of the secondary connection if both tunnels of the active connection are down.For more information on VPN route priority, see Route tables and VPN route priority.Follow"
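You can also pull the tunnel metrics with the AWS SDK to confirm asymmetry. The following boto3 sketch sums TunnelDataIn and TunnelDataOut for each tunnel over the last hour; the Region and the tunnel outside IP addresses are placeholder assumptions:

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumption: your Region
tunnel_ips = ["203.0.113.10", "203.0.113.20"]  # assumption: the outside IPs of tunnel A and tunnel B

end = datetime.utcnow()
start = end - timedelta(hours=1)

# If one tunnel shows only TunnelDataIn and the other only TunnelDataOut, traffic is
# entering through one tunnel and returning through the other (asymmetric routing).
for ip in tunnel_ips:
    for metric in ("TunnelDataIn", "TunnelDataOut"):
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/VPN",
            MetricName=metric,
            Dimensions=[{"Name": "TunnelIpAddress", "Value": ip}],
            StartTime=start,
            EndTime=end,
            Period=3600,
            Statistics=["Sum"],
        )
        total = sum(dp["Sum"] for dp in stats["Datapoints"])
        print(ip, metric, total)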
https://repost.aws/knowledge-center/vpn-avoid-asymmetry-static-routing
How do I change the subnet mask for the default subnet of my default Amazon VPC?
I want to change the subnet mask for the default subnet of my default virtual private cloud (VPC) in Amazon Virtual Private Cloud (Amazon VPC).
I want to change the subnet mask for the default subnet of my default virtual private cloud (VPC) in Amazon Virtual Private Cloud (Amazon VPC).Short descriptionYou can delete the default subnet for your default VPC and replace it with customized configurations.ResolutionDelete the default subnet for your default Amazon VPC in an AWS Region.Create a new subnet with your preferred subnet mask size. Be sure that the new subnet is part of your default VPC in the same AWS Region.Note: The first four IP addresses and the last IP address in each subnet CIDR block aren't available for your use. These IP addresses can't be assigned to an instance.Related informationDelete your default subnets and default VPCHow Amazon VPC worksFollow
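A rough boto3 equivalent of these two steps is shown below. The subnet ID, VPC ID, CIDR block, and Availability Zone are placeholder assumptions, and the delete call only succeeds when no resources are still using the subnet:

import boto3

ec2 = boto3.client("ec2")

# Step 1: delete the existing default subnet (it must be empty).
ec2.delete_subnet(SubnetId="subnet-0123456789abcdef0")  # assumption: the default subnet ID

# Step 2: create a replacement subnet in the default VPC with the preferred mask size.
new_subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",       # assumption: your default VPC ID
    CidrBlock="172.31.0.0/20",           # assumption: preferred CIDR within the default VPC range
    AvailabilityZone="us-east-1a",       # assumption
)
print("New subnet:", new_subnet["Subnet"]["SubnetId"])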
https://repost.aws/knowledge-center/change-subnet-mask
How do I increase my security group rule quota in Amazon VPC?
I've reached the quota for "Rules per security group" or "Security groups per network interface" in my Amazon Virtual Private Cloud (Amazon VPC). How do I increase my security group quota in Amazon VPC?
"I've reached the quota for "Rules per security group" or "Security groups per network interface" in my Amazon Virtual Private Cloud (Amazon VPC). How do I increase my security group quota in Amazon VPC?ResolutionThe quota for "Security groups per network interface" multiplied by the quota for "Rules per security group" can't exceed 1,000. You can modify the quota for both so that the product of the two doesn't exceed 1,000.For more information on how to modify the default security group quota, see Amazon VPC quotas.Follow"
https://repost.aws/knowledge-center/increase-security-group-rule-limit
Why can't I see the instance metrics for my Amazon SageMaker endpoint though I can see the invocation metrics?
"I can see invocation metrics for my Amazon SageMaker endpoint in Amazon CloudWatch. However, instance metrics such as CPUUtilization, MemoryUtilization, and DiskUtilization are missing."
"I can see invocation metrics for my Amazon SageMaker endpoint in Amazon CloudWatch. However, instance metrics such as CPUUtilization, MemoryUtilization, and DiskUtilization are missing.ResolutionThis happens when the execution role for Amazon SageMaker doesn't have the PutMetricData permission for CloudWatch. To resolve the issue, add "cloudwatch:PutMetricData" to the AWS Identity and Access Management (IAM) policy that's attached to the execution role.To send instance metrics to CloudWatch, instances must assume the execution role. That's why the PutMetricData permission is required for instance metrics.Related informationMonitor Amazon SageMaker with Amazon CloudWatchSageMaker rolesFollow"
https://repost.aws/knowledge-center/sagemaker-instance-metrics
How do I troubleshoot and fix failing health checks for Application Load Balancers?
The targets registered to my Application Load Balancer aren't healthy. How do I find out why my targets are failing health checks?
"The targets registered to my Application Load Balancer aren't healthy. How do I find out why my targets are failing health checks?ResolutionTo troubleshoot and fix failing health checks for your Application Load Balancer:1.    Check the health of your targets to find the reason code and description of your issue.2.    Follow the resolution steps below for the error that you received.Elb.InitialHealthCheckingDescription: Initial health checks in progress.Resolution: Before a target can receive requests from the load balancer, that target must pass initial health checks. Wait for your target to pass the initial health checks, and then recheck its health status.Elb.RegistrationInProgressDescription: Target registration is in progress.Resolution: The load balancer starts routing requests to the target as soon as the registration process completes and the target passes the initial health checks.Target.DeregistrationInProgressDescription: Target deregistration is in progress.Resolution: When you deregister a target, the load balancer waits until in-flight requests are complete. This is known as the deregistration delay. By default, Elastic Load Balancing waits 300 seconds before completing the deregistration process. However, you can customize this value.If a deregistering target has no in-flight requests and no active connections, then Elastic Load Balancing immediately deregisters without waiting for the deregistration delay to elapse. The initial state of a deregistering target is draining. After the deregistration delay elapses, the deregistration process completes and the state of the target is unused. If the target is part of an Auto Scaling group, then it can be terminated and replaced.Target.FailedHealthChecksDescription: The load balancer received an error while establishing a connection to the target, or the target response was malformed.Resolution:Verify that your application is running. Use the service command to check the status of services on Linux targets. For Windows targets, check the Services tab of the Windows Task Manager. If the service is stopped, start the service. If the service isn't recognized, verify that the required service is installed.Verify that the target is listening for traffic on the health check port. You can use the ss command on Linux targets to verify which ports your server is listening on. For Windows targets, you can use the netstat command.Verify that your application responds to the load balancer's health check requests accordingly. The following example shows a typical health check request from the Application Load Balancer that your targets must return with a valid HTTP response. The Host header value contains the private IP address of the target, followed by the health check port. The User-agent is set to ELB-HealthChecker/2.0. The line terminator for message-header fields is the sequence CRLF, and the header terminates at the first empty line followed by a CRLF. If necessary, add a default virtual host to your web server configuration to receive the health check requests.GET / HTTP/1.1Host: 10.0.0.1:80Connection: closeUser-Agent: ELB-HealthChecker/2.0Accept-Encoding: gzip, compressedThe target type of your target group determines which network interface that the load balancer sends health checks to on the targets. For example, you can register instance IDs, IP addresses, and Lambda functions. If the target type is instance ID, then the load balancer sends health check requests to the primary network interface of the targets. 
If the target type is IP address, then the load balancer sends health check requests to the network interface associated with the corresponding IP address. If your targets have multiple interfaces attached, then verify that your application is listening on the correct network interface.The ELBSecurityPolicy-2016-08 security policy is used for target connections and HTTPS health checks. Verify that the target provides a server certificate and a key in the format specified in the security policy. Also verify that the target supports one or more matching ciphers and a protocol provided by the load balancer to establish the TLS handshake.Target.InvalidStateDescription: The target is in the stopped or terminated state.Resolution: If the target is an Amazon Elastic Compute Cloud (Amazon EC2) instance, open the Amazon EC2 console. Then, verify that the instance is running. Start the instance if necessary.Target.IpUnusableDescription: The IP address can't be used as a target because it's in use by a load balancer.Resolution: When you create a target group, you specify its target type. When the target type is IP, don't choose an IP address that's already in use by a load balancer.Target.NotInUseDescription: The target group isn't used by any load balancer or the target is in an Availability Zone that isn't enabled for its load balancer.Resolution:Check the target group and verify that it's configured to receive traffic from the load balancer.Verify that the Availability Zone of the target is enabled for the load balancer.Target.NotRegisteredDescription: The target isn't registered to the target group.Resolution: Verify that the target is registered to the target group.Target.ResponseCodeMismatchDescription: The health checks didn't return an expected HTTP code.Resolution:Success codes are the HTTP codes to use when checking for a successful response from a target. You can specify values or ranges of values between 200 and 499. The default value is 200. Check your load balancer health check configuration to verify which success codes that it's expecting to receive. Then, inspect your web server access logs to see if the expected success codes are being returned. Modify the success code value if necessary.Verify that the ping path is valid. The ping path is the destination on the targets for health checks. Be sure to specify a valid URI (/path?query). The default is /. Modify the ping path value if necessary.Target.TimeoutDescription: Request timed out.Resolution: If you can connect, then the target page might not respond before the health check timeout period. Most web servers, such as NGINX and IIS, let you log how long the server takes to respond. If your health check requests take longer than the configured timeout, you can:Choose a simpler target page for the health check.Adjust the health check settings.If you can't connect:Verify that the security group associated with the target allows traffic from the load balancer using the health check port and health check protocol. You can add a rule to the security group to allow all traffic from the load balancer security group. Also, the security group for your load balancer must allow traffic to the targets.Verify that the network ACL associated with the subnets for your target allows inbound traffic on the health check port. Verify that it also allows outbound traffic on the ephemeral ports (1024-65535).Verify that the network ACL associated with the subnets for your load balancer nodes allows inbound traffic on the ephemeral ports. 
Verify that it also allows outbound traffic on the health check and ephemeral ports.Verify that any OS-level firewalls on the target are allowing health check traffic in and out.Verify that the route table for the subnets associated with the target contains an entry that allows health check traffic back to the load balancer.Verify that the memory and CPU utilization of your targets are within acceptable limits. If your memory or CPU utilization is too high, add additional targets or increase the capacity of your Auto Scaling Group. If your target is an EC2 instance, you can also upgrade the instance to a larger instance type.Related informationTroubleshoot your Application Load BalancersFollow"
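To find the reason code and description for each target (step 1 above) without using the console, the following boto3 (Python) sketch calls describe_target_health. The target group ARN is a placeholder.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/0123456789abcdef"

response = elbv2.describe_target_health(TargetGroupArn=TG_ARN)
for desc in response["TargetHealthDescriptions"]:
    health = desc["TargetHealth"]
    # Reason holds codes such as Target.FailedHealthChecks or Target.Timeout.
    print(
        desc["Target"]["Id"],
        health["State"],
        health.get("Reason", "-"),
        health.get("Description", "-"),
    )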
https://repost.aws/knowledge-center/elb-fix-failing-health-checks-alb
How do I troubleshoot issues with sending email messages in Amazon Cognito with default email settings compared to Amazon SES?
I need more information to resolve issues with sending email messages in Amazon Cognito with default settings compared to Amazon Simple Email Service (Amazon SES).
"I need more information to resolve issues with sending email messages in Amazon Cognito with default settings compared to Amazon Simple Email Service (Amazon SES).ResolutionTo determine whether Amazon Cognito sent an email message to a user's account, review the user's email configuration.In all scenarios, start by checking your email inbox and spam folder to verify that you didn't receive an email message from Amazon Cognito.Your troubleshooting steps depend on your email configuration in the Amazon Cognito console on the Messaging tab.If you choose Send email with Cognito in your setup, then Amazon Cognito uses an AWS owned account for email messages. With this setting, users don't have visibility into send metrics for that account. Users can open an AWS Support case to request access to Amazon SES to check the status of email messages to certain addresses.If you choose Send email with Amazon SES - Recommended in your setup, then users are actively using a verified email account in Amazon SES. Users can view Amazon CloudWatch logs to see if an email message was sent. When the sent email message is confirmed, users can explore other information in their accounts. To learn more about the different errors you might see in an Amazon SES CloudWatch log, see Amazon SES email sending errors.Follow"
https://repost.aws/knowledge-center/cognito-ses-default-email-settings
How do I hide my Lambda function's environment variables and unencrypted text from an IAM user?
I want to prevent AWS Identity and Access Management (IAM) users with access to my AWS Lambda function from seeing environment variables and unencrypted text. How do I do that?
"I want to prevent AWS Identity and Access Management (IAM) users with access to my AWS Lambda function from seeing environment variables and unencrypted text. How do I do that?ResolutionNote: The following solution prevents IAM identities from seeing a Lambda function's environment variables only in the Lambda console and the Lambda API. It doesn't prevent IAM identities from accessing decrypted environment variables using the function's code, or from outputting the environment variable values to Amazon CloudWatch Logs.To prevent IAM identities from accessing passwords, keys, or other sensitive information in your Lambda environment variables, do the following:Use an AWS Key Management Service (AWS KMS) customer managed key to encrypt the environment variables. To set up a KMS key, follow the instructions in Securing environment variables.Important: Make sure that you edit the key policy for the KMS key so that the policy denies access to the IAM identities that don't need access.KMS key policy example that denies specific IAM users permission to see Lambda environment variablesNote: Replace arn:aws:iam::1234567890:User1DeniedAccess and arn:aws:iam::1234567890:User2DeniedAccess with the Amazon Resource Names (ARNs) of IAM identities that you want to deny access. You can add more IAM ARNs to the key policy as needed.{ "Id": "MyCustomKey", "Version": "2012-10-17", "Statement": [ { "Sid": "Deny IAM users permission to see Lambda environment variables", "Effect": "Deny", "Principal": { "AWS": [ "arn:aws:iam::1234567890:User1DeniedAccess", "arn:aws:iam::1234567890:User2DeniedAccess" ] }, "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey" ], "Resource": "*" } ]}You receive an error message that the denied IAM user sees if they try to view the function's environment variables similar to the following:"Lambda was unable to decrypt your environment variables because the KMS access was denied. Please check your KMS permissions. KMS Exception: AccessDeniedException"Related informationCreating keysAWS Lambda permissionsFollow"
https://repost.aws/knowledge-center/lambda-environment-variables-iam-access
How can I use EC2Rescue to troubleshoot issues with my Amazon EC2 Windows instance?
I’m experiencing one of the following issues with my Amazon Elastic Compute Cloud (Amazon EC2) Windows instance:I can’t connect to my Amazon EC2 Windows instance.I am experiencing boot issues.I need to perform a restore action.I need to fix common issues such as a disk signature collision.I need to gather operating system (OS) logs for analysis and troubleshooting.How can I use EC2Rescue to resolve these issues?
"I’m experiencing one of the following issues with my Amazon Elastic Compute Cloud (Amazon EC2) Windows instance:I can’t connect to my Amazon EC2 Windows instance.I am experiencing boot issues.I need to perform a restore action.I need to fix common issues such as a disk signature collision.I need to gather operating system (OS) logs for analysis and troubleshooting.How can I use EC2Rescue to resolve these issues?Short descriptionEC2Rescue for EC2 Windows is a troubleshooting tool that you can run on your Amazon EC2 Windows Server instances. Use the tool to troubleshoot OS-level issues and to collect advanced logs and configuration files for further analysis. The following are some common issues that EC2Rescue can address:Instance connectivity issues due to firewall, Remote Desktop Protocol (RDP), or network interface configuration.OS boot issues due to a blue screen or stop error, a boot loop, or a corrupted registry.Any issues that might need advanced log analysis and troubleshooting.Note: You can capture a screenshot of an Amazon EC2 Windows instance to determine the state of the instance.You can run EC2Rescue manually or automatically using the AWS Systems Manager AWSSupport-ExecuteEC2Rescue Automation document.System requirementsEC2Rescue requires an Amazon EC2 Windows instance that:Runs on Windows Server 2008 R2 or laterHas .NET Framework 3.5 SPI or later installedIs accessible from an RDP connectionNote: EC2Rescue runs only on Windows Server 2008 R2 or later, but the tool can analyze the offline volumes of Windows Server 2008 or later.ResolutionFirst, choose whether you want to use the Systems Manager AWSSupport-ExecuteEC2Rescue Automation document, or run EC2Rescue manually. Then, follow the steps below for your chosen method.Use the Systems Manager AWSSupport-ExecuteEC2Rescue Automation documentThe AWSSupport-ExecuteEC2Rescue Automation document combines AWS Lambda functions with Systems Manager and AWS CloudFormation actions to automate EC2Rescue steps. For more information about how the document works, permissions requirements, and prerequisites for using the tool, see Run the EC2Rescue tool on unreachable instances.Important: The Automation workflow stops the instance. If the instance has an instance store volume, any data on the volume is lost when the instance stops. If you’re not using an Elastic IP address, the public IP address releases when the instance stops.When you're ready, run the Systems Manager AWSSupport-ExecuteEC2Rescue Automation.Run EC2Rescue manuallyYou can run EC2Rescue manually using one of the following methods:Use EC2Rescue for Windows Server GUI.Use the EC2Rescue for Windows Server command line interface (CLI).Use the AWSSupport-RunEC2RescueForWindowsTool Systems Manager Run Command.First, download EC2Rescue on your Amazon EC2 Windows instance.Note: The AWSSupport-RunEC2RescueForWindowsTool Systems Manager Run Command document method downloads and verifies EC2Rescue for Windows Server for you.Then, use EC2Rescue to troubleshoot Amazon EC2 Windows Server instance issues:Instance connectivity issues: Use the Diagnose and Rescue feature in Offline instance mode.OS boot issues: Use the Restore feature in Offline instance mode.Advanced logs and troubleshooting: Use the Capture logs feature in either Current instance mode or Offline instance mode.Current instance modeThis mode analyzes the instance that EC2Rescue is currently running. Current instance mode is read-only and doesn’t modify the current instance, so this mode doesn’t directly fix any issues. 
Use Current instance mode to gather system information and logs for analysis or for submission to system administrators or AWS Support.FeaturesSystem Information: Displays important system information about the current system in a text box for easy copying.Capture logs: First, select from a list of relevant troubleshooting logs. This feature then automatically gathers and packages those logs into a zipped folder under the name and location that you specify.Offline instance modeThis mode allows you to select the volume of an offline system. EC2Rescue analyzes the volume and presents automated rescue and restore options. Offline instance mode also includes the same Capture logs feature as Current instance mode.FeaturesSystem Information: Displays important system information about the current system in a text box for easy copying.Select Disk: If multiple offline root volumes are connected to the instance, this feature allows you to select a specific volume.Note: If the selected disk isn’t already online, this feature automatically brings the disk online for you.Diagnose and Rescue: Detects and provides options to automatically fix common configuration issues that prevent RDP connections or that cause instance status checks to fail. The following items are inspected for possible configuration issues:System time settingsWindows Firewall settingsRemote Desktop settingsEC2Config version and settings (Windows Server 2012 R2 and earlier)EC2Launch version and settings (Windows Server 2016 and later)Network interface settingsRestore: Set the offline instance to boot to Last Known Good Configuration or Restore registry from backup. Use this feature if you suspect an improperly configured or corrupted registry.Capture logs: First, select from a list of relevant troubleshooting logs. This feature then automatically gathers and packages those logs into a zipped folder under the name and location that you specify.Related informationUse EC2Rescue for Windows ServerUse EC2Rescue for LinuxTroubleshoot EC2 Windows instancesFollow"
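The following boto3 (Python) sketch starts the AWSSupport-ExecuteEC2Rescue Automation. The instance ID is a placeholder, and the parameter name shown is an assumption; confirm the document's required input parameters (for example, in the Systems Manager console) before running.

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

execution = ssm.start_automation_execution(
    DocumentName="AWSSupport-ExecuteEC2Rescue",
    # Parameter name is an assumption; check the Automation document's input parameters.
    Parameters={"UnreachableInstanceId": ["i-0123456789abcdef0"]},
)

print("Automation execution ID:", execution["AutomationExecutionId"])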
https://repost.aws/knowledge-center/ec2rescue-windows-troubleshoot
How can I delete an Elastic Beanstalk environment that's out of sync with a deleted Amazon RDS database?
"When I try to delete my AWS Elastic Beanstalk environment, I get the following error message in my environment's event stream: "Deleting RDS database named: xxxxxxxxx failed Reason: DBInstance xxxxxxxxx was not found during DescribeDBInstances." Next, I get another error message: "Stack deletion failed: The following resource(s) failed to delete: [AWSEBRDSDatabase]."How can I resolve these errors and delete my Elastic Beanstalk environment?"
"When I try to delete my AWS Elastic Beanstalk environment, I get the following error message in my environment's event stream: "Deleting RDS database named: xxxxxxxxx failed Reason: DBInstance xxxxxxxxx was not found during DescribeDBInstances." Next, I get another error message: "Stack deletion failed: The following resource(s) failed to delete: [AWSEBRDSDatabase]."How can I resolve these errors and delete my Elastic Beanstalk environment?Short DescriptionYou receive this error if you delete an Amazon Relational Database Service (Amazon RDS) database that was created as part of your Elastic Beanstalk environment. The lifecycle of that database is tied to your Elastic Beanstalk environment. If you delete that database from the Amazon RDS console (called an out-of-band deletion), then Elastic Beanstalk gets out of sync with your database resource and can't be deleted.Resolution1.    Open the AWS CloudFormation console.2.    In the navigation pane, choose Stacks.3.    In the Stack name column, select the stack for the Elastic Beanstalk environment that you want to delete.Note: In the Status column for your stack, you should see DELETE_FAILED.Tip: You can identify your stack by verifying that the environment ID from the Description column in the AWS CloudFormation console matches the environment ID of your Elastic Beanstalk environment.4.    Choose Delete.5.    In the pop-up window, select the AWSEBRDSDatabase check box in the Resources to retain - optional section, and then choose Delete Stack.Note: AWSEBRDSDatabase is the name of the resource that you want to retain, or skip, when you delete the stack. If you skip this database resource, the stack can delete successfully.Tip: You can also use the AWS Command Line Interface (AWS CLI) to delete a stack with the following command:aws cloudformation delete-stack --stack-name YourStackName --retain-resources AWSEBRDSDatabase6.    After the stack changes to DELETE_COMPLETE status, terminate your Elastic Beanstalk environment.Follow"
https://repost.aws/knowledge-center/elastic-beanstalk-deleted-rds-database
How do I resolve the "Access Denied" error in Kinesis Data Firehose when writing to an Amazon S3 bucket?
"I'm trying to write data from Amazon Kinesis Data Firehose to an Amazon Simple Storage Service (Amazon S3) bucket that's encrypted by AWS Key Management Service (AWS KMS). However, I receive an "Access Denied" error message. How do I resolve this?"
"I'm trying to write data from Amazon Kinesis Data Firehose to an Amazon Simple Storage Service (Amazon S3) bucket that's encrypted by AWS Key Management Service (AWS KMS). However, I receive an "Access Denied" error message. How do I resolve this?ResolutionImportant: Make sure that the AWS Identity and Access Management (IAM) role for Kinesis Data Firehose has relevant Amazon S3 permissions. For more information about S3 permissions, see Grant Kinesis Data Firehose access to an Amazon S3 destination.To resolve the "Access Denied" error message in Kinesis Data Firehose, perform the following steps:1.    Open the AWS KMS console.2.    Choose the KMS key that is currently used to encrypt your S3 bucket.3.    Choose Switch to policy view.4.    Check that you have the required permissions in the KMS key policy. Proper access allows you to encrypt data that is written to your S3 bucket.Note: For more information about KMS key policies, see Protecting data using server-side encryption with KMS keys stored in AWS Key Management Service (SSE-KMS).5.    Update your policy, granting Kinesis Data Firehose access to the KMS key:{ "Sid": "Allow use of the key", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<account-ID>:role/<FirehoseRole>" }, "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey" ], "Resource": "<ARN of the KMS key>"}Be sure to specify the Amazon Resource Name (ARN) of the KMS key that encrypted your S3 bucket.6.    Choose Save.You can also resolve the "Access Denied" error message without modifying the policy. To resolve the error message, perform the following steps:1.    Open the AWS KMS console.2.    Choose the KMS key that is currently being used to encrypt your S3 bucket.3.    In the Key users section, choose Add.4.    Select your Kinesis Data Firehose role.5.    Choose Add. You now have the proper permissions to write data from Kinesis Data Firehose to the encrypted S3 bucket.Related informationEditing keysFollow"
https://repost.aws/knowledge-center/kinesis-data-firehose-s3-access-error
How can I run an Amazon ECS task on Fargate in a private subnet?
I want to run an Amazon Elastic Container Service (Amazon ECS) task on AWS Fargate in a private subnet.
"I want to run an Amazon Elastic Container Service (Amazon ECS) task on AWS Fargate in a private subnet.Short descriptionYou can run Fargate tasks in private subnets. However, based on your use case, you might require internet access for certain operations, such as pulling an image from a public repository. Or, you might want to prevent any internet access for your tasks.To run Fargate tasks in a private subnet without internet access, use VPC endpoints. VPC endpoints allow you to run Fargate tasks without granting the tasks access to the internet. The required endpoints are accessed over a private IP address.If you need your task to access the internet from a private subnet, grant internet access using a NAT Gateway. The required endpoints are accessed over the public IP address of the NAT gateway.ResolutionCreate a VPCCreate an Amazon Virtual Private Cloud (Amazon VPC) with public or private subnets.Then, depending on your use case, follow the steps in Use a private subnet without internet access (VPC endpoints method) or Use a Private subnet with internet access sections of this article.Use a private subnet without internet access (VPC endpoints method)To create interface endpoints and an S3 gateway:Create an S3 gateway endpoint.Create ECR interface endpoints.If your task uses Secrets Manager to inject secrets into the task and CloudWatch Logs, create interface endpoints for Secrets Manager and CloudWatch Logs.Then, follow the instructions in the Create an Amazon ECS cluster and service section of this article.Use a private subnet with internet accessCreate a NAT gateway.When you create your NAT gateway, be sure that you:Place your NAT gateway inside the public subnet.Update the route table of the private subnet. For Destination, enter 0.0.0.0/0. For Target, select the ID of your NAT gateway.Then, follow the instructions in the Create an Amazon ECS cluster and service section of this article.Create an Amazon ECS cluster and serviceCreate an Amazon ECS cluster using the Networking only template (powered by Fargate).Create an Amazon ECS service.When you configure the network for the service, be sure that you:Choose the cluster that you created in step 1 for your cluster VPC.Based on the method that you chose earlier, choose the private subnet that you configured for the VPC endpoints, or the subnet that you configured for the NAT gateway.Now, your new tasks will launch in the private subnet.Follow"
https://repost.aws/knowledge-center/ecs-fargate-tasks-private-subnet
What S3 bucket policy should I use to comply with the AWS Config rule s3-bucket-ssl-requests-only?
I enabled the AWS Config rule s3-bucket-ssl-requests-only to be sure that my Amazon Simple Storage Service (Amazon S3) bucket policies require encryption during data transit. How can I create bucket policies that comply with this rule?
"I enabled the AWS Config rule s3-bucket-ssl-requests-only to be sure that my Amazon Simple Storage Service (Amazon S3) bucket policies require encryption during data transit. How can I create bucket policies that comply with this rule?ResolutionNote: Amazon S3 offers encryption in transit and encryption at rest. Encryption in transit refers to HTTPS and encryption at rest refers to client-side or server-side encryption.Amazon S3 allows both HTTP and HTTPS requests. By default, requests are made through the AWS Management Console, AWS Command Line Interface (AWS CLI), or HTTPS.To comply with the s3-bucket-ssl-requests-only rule, confirm that your bucket policies explicitly deny access to HTTP requests. Bucket policies that allow HTTPS requests without explicitly denying HTTP requests might not comply with the rule.To determine HTTP or HTTPS requests in a bucket policy, use a condition that checks for the key "aws:SecureTransport". When this key is true, then request is sent through HTTPS. To comply with the s3-bucket-ssl-requests-only rule, create a bucket policy that explicitly denies access when the request meets the condition "aws:SecureTransport": "false". This policy explicitly denies access to HTTP requests.Bucket policy that complies with s3-bucket-ssl-requests-only ruleFor example, the following bucket policy complies with the rule. The policy explicitly denies all actions on the bucket and objects when the request meets the condition "aws:SecureTransport": "false":{ "Id": "ExamplePolicy", "Version": "2012-10-17", "Statement": [ { "Sid": "AllowSSLRequestsOnly", "Action": "s3:*", "Effect": "Deny", "Resource": [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET", "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" ], "Condition": { "Bool": { "aws:SecureTransport": "false" } }, "Principal": "*" } ]}Bucket policy that doesn't comply with s3-bucket-ssl-requests-only ruleIn contrast, the following bucket policy doesn't comply with the rule. Instead of using an explicit deny statement, the policy allows access to requests that meet the condition "aws:SecureTransport": "true". This statement allows anonymous access to s3:GetObject for all objects in the bucket if the request uses HTTPS. Avoid this type of bucket policy unless your use case requires anonymous access through HTTPS.{ "Id": "ExamplePolicy", "Version": "2012-10-17", "Statement": [ { "Sid": "NOT-RECOMMENDED-FOR__AWSCONFIG-Rule_s3-bucket-ssl-requests-only", "Action": "s3:GetObject", "Effect": "Allow", "Resource": [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" ], "Condition": { "Bool": { "aws:SecureTransport": "true" } }, "Principal": "*" } ]}Related informationHow to use bucket policies and apply defense-in-depth to help secure your Amazon S3 dataFollow"
https://repost.aws/knowledge-center/s3-bucket-policy-for-config-rule
Why didn't I receive a domain transfer confirmation or authorization email from Route 53?
"I'm trying to transfer my domain from another registrar to Amazon Route 53, but I didn't receive the confirmation or authorization emails. Why didn't I receive these emails, and how can I resend them so that I can transfer my domain?"
"I'm trying to transfer my domain from another registrar to Amazon Route 53, but I didn't receive the confirmation or authorization emails. Why didn't I receive these emails, and how can I resend them so that I can transfer my domain?Short descriptionIf you didn't receive the confirmation or authorization emails from Route 53, you can send a request to resend them. Before submitting a request to resend these emails, be sure that:Privacy protection features for your domain are turned offYour domain's current registrar has the correct contact informationResolutionVerify that the privacy protection feature for your domain is turned off with your current registrar.If necessary, update your registrant contact email address with your domain's current registrar.Resend your confirmation or authorization emails.Complete your domain registration transfer.Note: Due to General Data Protection Regulation (GDPR), it might not be possible to use WHOIS to determine your registrant contact information. If you're unable to transfer your domain due to GDPR, contact AWS Support.Related informationTransferring a domain to a different AWS accountTransferring registration for a domain to Amazon Route 53Contacting AWS Support about domain registration issuesFollow"
https://repost.aws/knowledge-center/route53-domain-transfer
How do I troubleshoot an Amazon SES configuration set that isn't working?
My Amazon Simple Email Service (Amazon SES) configuration set isn’t publishing the events that are specified in its rules. How do I troubleshoot this?
"My Amazon Simple Email Service (Amazon SES) configuration set isn’t publishing the events that are specified in its rules. How do I troubleshoot this?ResolutionApply a configuration setFirst, make sure that your configuration set is properly applied. To apply a configuration set to emails that you want to monitor, pass the name of the configuration set in the email header X-SES-CONFIGURATION-SET. For example:X-SES-CONFIGURATION-SET: example_configuration_set_nameBe sure to replace example_configuration_set_name with the name of the configuration set that you want to use. Or, you can assign a default configuration set to a specific verified identity. This makes sure that messages that are sent from the identity use the assigned configuration set.Note: If the verified identity in the FROM field has a default configuration that conflicts with a configuration set specified in the X-SES-CONFIGURATION-SET header, then the header’s configuration set takes precedence.To troubleshoot configuration sets that aren't working, verify that the configuration set is passed in the headers of the relevant emails. For more information on passing the configuration set as a header when using different methods for sending emails, see Specifying a configuration set when you send email.Amazon CloudWatch destinationIf you have issues with an Amazon SES configuration set to publish metrics to Amazon CloudWatch, then verify that the value source is configured correctly:Message tag: The message tag is a key-value pair that contains the dimension name that CloudWatch uses to pull events. For CloudWatch to detect the events, the tags must be specified as an email header using the X-SES-MESSAGE-TAGS header.Email header: With this value source, Amazon SES retrieves the dimension name and values from the email headers. However, you can't use these email headers as the dimension names: Received, To, From, DKIM-Signature, CC, message-id, or Return-Path.Link Tag: Link tags are key-value pairs. They’re used for publishing click events to CloudWatch for an email campaign that has embedded links. Be sure to verify that the link tag is configured correctly in both the embedded links and the configuration set.Amazon Simple Notification Service (Amazon SNS) DestinationIf you have issues with an Amazon SES configuration set to publish SES events to an SNS topic, then verify the following:The SNS topic is a Standard type topic. SES doesn’t support FIFO type topics.The SNS topic has an endpoint subscribed to the topic, and the correct endpoint is referenced for the published events.Note: Some endpoints have prerequisites that must be met before event publishing can begin. For instance, an Email/Email-JSON endpoint requires that you confirm the subscription before the email address can receive messages. 
Therefore, note the relevant requirements for each endpoint type while configuring your SNS topic.SES has the required permissions to publish notifications to your topic.If your SNS topic has encryption activated and uses an AWS KMS key, then verify that SES has the following KMS permissions on the key in use: kms:GenerateDataKey and kms:Decrypt.Amazon Kinesis Data Firehose destinationIf you have issues with an Amazon SES configuration set to publish SES events to a Kinesis Data Firehose, then verify the following:The correct Delivery stream is being referenced for the published SES events.The AWS Identity and Access Management (IAM) role that Amazon SES uses has permissions to publish to your Kinesis Data Firehose delivery stream.Amazon Pinpoint destinationIf you have issues with an Amazon SES configuration set to publish SES events to Amazon Pinpoint, note that Amazon Pinpoint doesn’t support Delivery delays or Subscriptions event types.Follow"
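The following boto3 (Python) sketch sends a test message through a configuration set, which is an alternative to setting the X-SES-CONFIGURATION-SET header yourself. The identities, configuration set name, and message tag are placeholders.

import boto3

ses = boto3.client("ses", region_name="us-east-1")

ses.send_email(
    Source="sender@example.com",
    Destination={"ToAddresses": ["recipient@example.com"]},
    Message={
        "Subject": {"Data": "Configuration set test"},
        "Body": {"Text": {"Data": "Hello from Amazon SES"}},
    },
    ConfigurationSetName="example_configuration_set_name",
    # Message tags become CloudWatch dimensions when the value source is the message tag.
    Tags=[{"Name": "campaign", "Value": "test"}],
)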
https://repost.aws/knowledge-center/ses-troubleshoot-configuration-set
How do I troubleshoot time issues with my EC2 Windows instance?
I want to permanently change the time settings on my Amazon Elastic Compute Cloud (Amazon EC2) Windows instance to my local time zone. I’m unable to change the time and date on my instance. How do I troubleshoot this?
"I want to permanently change the time settings on my Amazon Elastic Compute Cloud (Amazon EC2) Windows instance to my local time zone. I’m unable to change the time and date on my Instance. How do I troubleshoot this?Short descriptionThe following are common time-related issues that occur on Windows instances:Unable to change the time using Systems Settings or the Control Panel due to the following reasons:The Set Time Zone Automatically option is greyed out.The You do not have permission to perform this task error displays when trying to change time settings using the Control Panel.The time change doesn't persist after system reboots.The Amazon Time Sync Service is behind other atomic clocks by X minutes.It's a best practice to use Coordinated Universal Time (UTC) for your instances to avoid human error. Using UTC on your instances also facilitates synchronization across your AWS CloudWatch logs, metrics, local logs, and other services. You can use a different time zone to suit your requirements, if needed.ResolutionUnable to change the time using Systems Settings or the Control PanelAmazon provides the Amazon Time Sync Service, which is accessible from all EC2 instances. If you can't change the time zone and time/date, use the Command Prompt window to configure Amazon Time Sync Service on your instance.Before you begin, use the following steps to verify that the Prohibit access to Control Panel and PC settings policy is disabled in the Local Group Policy Editor:Open the Local Group Policy Editor.Select User Configuration, Administrative Templates, Control Panel.Highlight Prohibit access to Control Panel and PC settings, and then select Edit policy setting.Select Disabled.Change the time zone using Command PromptAfter verifying the policy setting, you can change the time zone from the Command Prompt window.Change the time and date settings using Command PromptRun a Command Prompt window as Administrator.Enter time or date in the Command Prompt window, then select OK.Enter the new time or date at the prompt.Enter a new time in HH:MM:SS AM/PM format. For example, 08:35:00 AM.Enter a new date in mm-dd-yyyy format. For example, 01-01-2021.The new time and date settings take effect immediately.Alternatively, you can use external network time protocol (NTP) sources. For more information, see Configure network time protocol (NTP).Note: Because the Citrix Xen Guest Agent service might cause issues with time synchronization, it's a best practice to update Citrix PV drivers to Amazon PV drivers.The time change doesn't persist after system rebootsIf you're running Windows Server 2008 or later, add a RealTimeIsUniversal registry key to make the new time persist after reboot.If your instance is domain joined to an AWS Managed Microsoft AD directory, change the time settings on your instance to use the domain controller as the time source to avoid a time skew. Skewing the time breaks authentication due to Kerberos restrictions. This might cause issues logging in to the instance. To prevent this, make sure that the RealTimeIsUniversal registry key is enabled before rebooting your instance.The Amazon Time Sync Service is behind other atomic clocks by X minutesTo resync the Amazon Time Sync Service to your instance, do the following:1.    Run the following command to reset the NTP server to point to the Amazon Time Sync Service server:w32tm /config /manualpeerlist:”169.254.169.123,0x9” /syncfromflags:manual /update2.    Run the following commands:net stop w32timew32tm /unregister3.    
From the Start menu on the instance, select Run, and enter services.msc. Verify that Windows Time is deleted.4.    Run the following commands:W32tm /registerNet start w32timew32tm /query /configuration /verbosew32tm /resync /rediscover and w32tm /resync /forcew32tm /query /status /verbosew32tm /stripchart /computer:169.254.169.123 /period:5w32tm /query /sourceNote: If you see the local CMOS clock, wait a few minutes and then run the w32tm /query /source command again to verify the source.Related informationSet the time for a Windows instanceDefault network time protocol (NTP) settings for Amazon Windows AMIsFollow"
https://repost.aws/knowledge-center/ec2-windows-time-service
What permissions do I need to access an Amazon SQS queue?
I want to access an Amazon Simple Queue Service (Amazon SQS) queue. What SQS access policy and AWS Identity and Access Management (IAM) policy permissions are required to access the queue?
"I want to access an Amazon Simple Queue Service (Amazon SQS) queue. What SQS access policy and AWS Identity and Access Management (IAM) policy permissions are required to access the queue?ResolutionTo access an Amazon SQS queue, you must add permissions to the SQS access policy, the IAM policy, or both. The specific permissions requirements differ depending on whether the SQS queue and IAM role are from the same account.Same accountA statement to allow access is required in either the SQS access policy or the IAM policy.Note: If either the SQS access policy or IAM policy explicitly allows access, but the other policy explicitly denies access, access to the queue is denied.IAM user policySQS access policyResultAllowAllowAllowAllowNeither Allow nor DenyAllowAllowDenyDenyNeither Allow nor DenyAllowAllowNeither Allow nor DenyNeither Allow nor DenyImplicit DenyNeither Allow nor DenyDenyDenyDenyAllowDenyDenyNeither Allow nor DenyDenyDenyDenyDenyDifferent accountA statement to allow access is required in both the SQS access policy and the IAM policy.IAM user policySQS access policyResultAllowAllowAllowAllowNeither Allow nor DenyImplicit DenyAllowDenyDenyNeither Allow nor DenyAllowImplicit DenyNeither Allow nor DenyNeither Allow nor DenyImplicit DenyNeither Allow nor DenyDenyDenyDenyAllowDenyDenyNeither Allow nor DenyDenyDenyDenyDenyExample policy statementsThe following example policies show the permissions that you must set on the IAM policy and SQS queue access policy to allow cross-account access for an SQS queue.The first policy grants permissions for username1 to send messages to the resource arn:aws:sqs:us-east-1:123456789012:queue_1.The second policy allows username1 to send messages to the queue.For more information on these these policies, see IAM policy types: How and when to use them.Example IAM policy statement for username1{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Action": "sqs:SendMessage", "Resource": "arn:aws:sqs:us-east-1:123456789012:queue_1" }]}Example SQS resource policy statement for queue_1{ "Version": "2012-10-17", "Id": "Queue1_Policy", "Statement": [{ "Sid":"Queue1_AllActions", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::111122223333:user/username1" ] }, "Action": "sqs:SendMessage", "Resource": "arn:aws:sqs:us-east-1:123456789012:queue_1" }]}Follow"
https://repost.aws/knowledge-center/sqs-queue-access-permissions
How do I configure my API Gateway REST API to pass query string parameters to a backend Lambda function or HTTP endpoint?
I need my Amazon API Gateway REST API to pass query string parameters to a backend AWS Lambda function and an HTTP endpoint.
"I need my Amazon API Gateway REST API to pass query string parameters to a backend AWS Lambda function and an HTTP endpoint.Short descriptionTo configure a REST API to pass query string parameters to a backend AWS Lambda function, use a Lambda custom integration.To pass query string parameters to an HTTP endpoint, use an HTTP custom integration.Important: Make sure that you supply the input data as the integration request payload. It’s a best practice to use a mapping template to supply the payload. For more information, see Map request and response payloads between method and integration.ResolutionPassing query string parameters to a backend Lambda function1.    Open the API Gateway console, and then choose your API.2.    In the Resources pane, choose the configured HTTP method.Note: If there’s more than one HTTP method configured for the API, then repeat steps 2 through 15 for each method.3.    In the Method Execution pane, choose Method Request.4.    Expand the URL Query String Parameters dropdown list, then choose Add query string.5.    For the Name field, enter pet, and then choose the check mark icon.6.    Choose the Required check box.7.    Choose the Method Execution pane.8.    Choose Integration Request.9.    Choose the Mapping Templates dropdown list, and then choose Add mapping template.10.    For the Content-Type field, enter application/json and then choose the check mark icon.11.    In the pop-up that appears, choose Yes, secure this integration.12.    For Request body passthrough, choose When there are no templates defined (recommended).13.    In the mapping template editor, copy and replace the existing script with the following code:{ "pet": "$input.params('pet')"}Note: For more information, see the $input variables.14.    Choose Save, and then choose Deploy the API.15.    To test the API's new endpoint, run the following curl command:curl -X GET https://jp58lnf5vh.execute-api.us-west-2.amazonaws.com/dev/lambda-non-proxy?pet=dogImportant: Make sure that the curl command has the query string parameter pet=dog.Passing query string parameters to an HTTP endpoint1.    Open the API Gateway console, and then choose your API.2.    In the Resources pane, choose the configured HTTP method.Note: If there’s more than one HTTP method configured for the API, then repeat steps two through 10 for each method.3.    In the Method Execution pane, choose Method Request.4.    Expand the URL Query String Parameters dropdown list, and then choose Add query string.5.    For the Name field, enter type, and then choose the check mark icon.6.    Choose the Method Execution pane.7.    Choose Integration Request.8.    Expand the URL Query String Parameters section.Note: By default, the method request query string parameters are mapped to the like-named integration request query string parameters. To view this, refresh the API Gateway console page. To map a method request parameter to a different integration request parameter, first delete the existing integration request parameter. Then, add a new query string with the desired method request parameter mapping expression.9.    Choose Save, then choose Deploy the API.10.    
To test the API’s new endpoint, run the following curl command:curl -X GET https://jp58lnf5vh.execute-api.us-west-2.amazonaws.com/dev/http-endpoint?pet=dogImportant: Make sure that the curl command has the query string parameter pet=dog.Related informationTutorial: Build a Hello World REST API with Lambda proxy integrationTutorial: Build an API Gateway REST API with Lambda non-proxy integrationTutorial: Build a REST API with HTTP proxy integrationTutorial: Build a REST API with HTTP non-proxy integrationFollow"
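As an alternative to the curl tests above, the following boto3 (Python) sketch invokes the method through the API Gateway test-invoke feature without deploying. The REST API ID and resource ID are placeholders.

import boto3

apigw = boto3.client("apigateway", region_name="us-west-2")

response = apigw.test_invoke_method(
    restApiId="jp58lnf5vh",    # placeholder REST API ID
    resourceId="abc123",       # placeholder resource ID
    httpMethod="GET",
    pathWithQueryString="/lambda-non-proxy?pet=dog",
)

print(response["status"])
print(response["body"])   # for the Lambda integration, expect the mapped {"pet": "dog"} payload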
https://repost.aws/knowledge-center/pass-api-gateway-rest-api-parameters
How do I launch and troubleshoot Spot Instances using Amazon EKS managed node groups?
I want to create a managed node group with Spot capacity for my Amazon Elastic Kubernetes Service (Amazon EKS) cluster and troubleshoot issues.
"I want to create a managed node group with Spot capacity for my Amazon Elastic Kubernetes Service (Amazon EKS) cluster and troubleshoot issues.Resolution1.    Install eksctl.Important: Make sure to check all AWS Command Line Interface (AWS CLI) commands before using them and replace instances of example strings with your values. For example, replace example_cluster with your cluster name.2.    Create a managed node group with Spot capacity in your existing cluster by running the following command:#eksctl create nodegroup --cluster=<example_cluster> --spot --instance-types=<Comma-separated list of instance types> --region <EKS cluster AWS region. Defaults to the value set in your AWS config (~/.aws/config)>Example:#eksctl create nodegroup --cluster=demo --spot --instance-types=c3.large,c4.large,c5.large --region us-east-1Note: There are other flags you can set up while creating a Spot managed node group, like --name, --nodes, --nodes-min, and --nodes-max. Get the complete list of all available flags by running the following command:#eksctl create nodegroup --help3.    If you maintain an eksctl ClusterConfig config file for your cluster, then you can also create a Spot managed node group with that file. Create Spot Instances using managed node groups with a spot-cluster.yaml config fileby running the following command:apiVersion: eksctl.io/v1alpha5kind: ClusterConfigmetadata: name: <example_cluster> region: <example_region>managedNodeGroups:- name: spot instanceTypes: ["c3.large","c4.large","c5.large","c5d.large","c5n.large","c5a.large"] spot: true4.    Create a node group using the config file by running the following command:# eksctl create nodegroup -f spot-cluster.yamlTroubleshooting issues related to Spot Instances in Amazon EKSCheck the health of a managed node group by using eksctl or the Amazon EKS console as follows:$ eksctl utils nodegroup-health --name=<example_nodegroup> --cluster=<example_cluster>The health of Spot managed node groups can degrade with an error, due to a lack of Spot capacity for used instance types. See the following error for example:AsgInstanceLaunchFailures Could not launch Spot Instances. UnfulfillableCapacity - Unable to fulfill capacity due to your request configuration. Please adjust your request and try again. Launching EC2 instance failed.Note: To successfully adopt Spot Instances, it's a best practice to implement Spot Instance diversification as part of your Spot managed node group configuration. Spot Instance diversification helps to get capacity from multiple Spot Instance pools. Getting this capacity is for both scaling up and replacing Spot Instances that might receive a Spot Instance termination notification.If your cluster Spot node groups must be provisioned with instance types that adhere to a 1 vCPU:4 GB of RAM ratio, then diversify your Spot Instance pools. Diversify your instance pools by using one of the following strategies:Create multiple node groups with each having different sizes. For example, a node group of size 4 vCPUs and 16 GB of RAM, and another node group of 8 vCPUs and 32 GB of RAM.Implement instance diversification within the node groups. 
Do this by selecting a mix of instance types and families from different Spot Instance pools that meet the same vCPUs and memory criteria.Use amazon-ec2-instance-selector to select the relevant instance types and families with sufficient number of vCPUs and RAM by running the following command:curl -Lo ec2-instance-selector https://github.com/aws/amazon-ec2-instance-selector/releases/download/v2.0.3/ec2-instance-selector-`uname | tr '[:upper:]' '[:lower:]'`-amd64 && chmod +x ec2-instance-selectorsudo mv ec2-instance-selector /usr/local/bin/ec2-instance-selector --versionExample:ec2-instance-selector --vcpus 4 --memory 16 --gpus 0 --current-generation -a x86_64 --deny-list '.*[ni].*'The preceding command displays a list similar to the following. Use these instances as part of one of your node groups.m4.xlargem5.xlargem5a.xlargem5ad.xlargem5d.xlarget2.xlarget3.xlarget3a.xlargeNote: Instance types of existing node groups can't be changed using Amazon EKS API. It's a best practice to create a new Spot node group with your desired instance types. Enter the following into the eksctl create nodegroup command. Note the new eksctl flag to indicate that a node group runs Spot Instances: --spot.$eksctl create nodegroup --cluster=<example_cluster> --spot --instance-types m5.xlarge,m4.xlarge,m5a.xlarge,m5d.xlarge,m5n.xlarge,m5ad.xlarge --region <example_region>Follow"
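If you prefer the AWS API over eksctl, the following boto3 (Python) sketch creates an equivalent diversified Spot managed node group. The subnet IDs and node role ARN are placeholders, and the instance type list reuses the selection produced above.

import boto3

eks = boto3.client("eks", region_name="us-east-1")

eks.create_nodegroup(
    clusterName="demo",
    nodegroupName="spot-diversified",
    capacityType="SPOT",
    instanceTypes=["m4.xlarge", "m5.xlarge", "m5a.xlarge", "m5d.xlarge", "m5ad.xlarge"],
    scalingConfig={"minSize": 1, "maxSize": 4, "desiredSize": 2},
    subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],   # placeholder subnets
    nodeRole="arn:aws:iam::123456789012:role/EksNodeInstanceRole",      # placeholder node role
)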
https://repost.aws/knowledge-center/eks-troubleshoot-spot-instances
How do I stop and start Amazon EC2 instances at regular intervals using Lambda?
I want to reduce my Amazon Elastic Compute Cloud (Amazon EC2) usage by stopping and starting my EC2 instances automatically.
"I want to reduce my Amazon Elastic Compute Cloud (Amazon EC2) usage by stopping and starting my EC2 instances automatically.Short descriptionYou can use AWS Lambda and Amazon EventBridge to automatically stop and start EC2 instances.Note: The following resolution is a simple solution. For a more advanced solution, use the AWS Instance Scheduler. For more information, see Automate starting and stopping AWS instances.To use Lambda to stop and start EC2 instances at regular intervals, complete the following steps:1.    Create a custom AWS Identity and Access Management (IAM) policy and execution role for your Lambda function.2.    Create Lambda functions that stop and start your EC2 instances.3.    Test your Lambda functions.4.    Create EventBridge rules that run your function on a schedule.Note: You can also create rules that react to events that take place in your AWS account.ResolutionNote: If you receive a Client error on launch error after completing the following steps, then see When I start my instance with encrypted volumes attached, the instance immediately stops with the error "client error on launch".Get the IDs of the EC2 instances that you want to stop and start. Then, follow these steps.Create an IAM policy and execution role for your Lambda function1.    Create an IAM policy using the JSON policy editor. Copy and paste the following JSON policy document into the policy editor:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": "arn:aws:logs:*:*:*" }, { "Effect": "Allow", "Action": [ "ec2:Start*", "ec2:Stop*" ], "Resource": "*" } ]}2.    Create an IAM role for Lambda.Important: When attaching a permissions policy to Lambda, make sure that you choose the IAM policy that you just created.Create Lambda functions that stop and start your EC2 instances1.    Open the Lambda console, and then choose Create function.2.    Choose Author from scratch.3.    Under Basic information, add the following information:For Function name, enter a name that identifies it as the function that's used to stop your EC2 instances. For example, "StopEC2Instances".For Runtime, choose Python 3.9.Under Permissions, expand Change default execution role.Under Execution role, choose Use an existing role.Under Existing role, choose the IAM role that you created.4.    Choose Create function.5.    Under Code, Code source, copy and paste the following code into the editor pane in the code editor: (lambda_function). This code stops the EC2 instances that you identify.Example function code to stop EC2 instancesimport boto3region = 'us-west-1'instances = ['i-12345cb6de4f78g9h', 'i-08ce9b2d7eccf6d26']ec2 = boto3.client('ec2', region_name=region)def lambda_handler(event, context): ec2.stop_instances(InstanceIds=instances) print('stopped your instances: ' + str(instances))Important: For region, replace "us-west-1" with the AWS Region that your instances are in. For instances, replace the example EC2 instance IDs with the IDs of the specific instances that you want to stop and start.6.    Choose Deploy.7.    On the Configuration tab, choose General configuration, Edit. Set Timeout to 10 seconds, and then choose Save.Note: Configure the Lambda function settings as needed for your use case. For example, to stop and start multiple instances, you might use a different value for Timeout and Memory.8.    Repeat steps 1-7 to create another function. 
Complete the following steps differently so that this function starts your EC2 instances:In step 3, enter a different Function name than the one that you used before. For example, "StartEC2Instances".In step 5, copy and paste the following code into the editor pane in the code editor: ( lambda_function).Example function code to start EC2 instancesimport boto3region = 'us-west-1'instances = ['i-12345cb6de4f78g9h', 'i-08ce9b2d7eccf6d26']ec2 = boto3.client('ec2', region_name=region)def lambda_handler(event, context): ec2.start_instances(InstanceIds=instances) print('started your instances: ' + str(instances))Important: For region and instances, use the same values that you used for the code to stop your EC2 instances.Test your Lambda functions1.    Open the Lambda console, and then choose Functions.2.    Choose one of the functions that you created.3.    Choose the Code tab.4.    In the Code source section, choose Test.5.    In the Configure test event dialog box, choose Create new test event.6.    Enter an Event name. Then, choose Create.Note: Don't change the JSON code for the test event. The function doesn't use it.7.    Choose Test to run the function.8.    Repeat steps 1-7 for the other function that you created.Check the status of your EC2 instancesAWS Management ConsoleYou can check the status of your EC2 instances before and after testing to confirm that your functions work as expected.AWS CloudTrailYou can use CloudTrail to check for events to confirm that the Lambda function stopped or started the EC2 instance.1.    Open the CloudTrail console.2.    In the navigation pane, choose Event history.3.    Choose the Lookup attributes dropdown list, and then choose Event name.4.    In the search bar, enter StopInstances to review the results.5.     In the search bar, enter StartInstances to review the results.If there are no results, then the Lambda function didn't stop or start the EC2 instances.Create EventBridge rules that run your Lambda functions1.    Open the EventBridge console.2.    Select Create rule.3.    Enter a Name for your rule, such as "StopEC2Instances". (Optional) Enter a description for the rule in Description.4.    For Rule type, choose Schedule, and then choose Continue in EventBridge Scheduler.5.    For Schedule pattern, choose Recurring schedule. Complete one of the following steps:Under Schedule pattern, for Occurrence, choose Recurring schedule. Then complete one of the following steps:When Schedule type is Rate-based schedule, for Rate expression, enter a rate value and choose an interval of time in minutes, hours, or days.When Schedule type is Cron-based schedule, for Cron expression, enter an expression that tells Lambda when to stop your instance. For information on expression syntax, see Schedule expressions for rules.Note: Cron expressions are evaluated in UTC. Make sure that you adjust the expression for your preferred time zone.6.    In Select targets, choose Lambda function from the Target dropdown list.7.    For Function, choose the function that stops your EC2 instances.8.    Choose Skip to review and create, and then choose Create.9.    Repeat steps 1-8 to create a rule to start your EC2 instances. 
Complete the following steps differently:Enter a name for your rule, such as "StartEC2Instances".(Optional) In Description, enter a description for your rule, such as "Starts EC2 instances every morning at 7 AM."In step 5, for Cron expression, enter an expression that tells Lambda when to start your instances.In step 7, for Function, choose the function that starts your EC2 instances.Related informationTutorial: Schedule Lambda functions using EventBridgeEvents from AWS servicesAdding stop actions to Amazon CloudWatch alarmsScheduled Reserved InstancesFollow"
https://repost.aws/knowledge-center/start-stop-lambda-eventbridge
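The console steps above can also be scripted. The following is a minimal boto3 sketch of the classic EventBridge rule approach (put_rule and put_targets) rather than EventBridge Scheduler; the function name, account ID, ARN, Region, and cron schedule are placeholder assumptions that you must replace with your own values.

```python
# Rough boto3 sketch of the EventBridge rule setup described above.
# All names and ARNs (rule name, Lambda ARN, schedule) are placeholders.
import boto3

events = boto3.client("events", region_name="us-west-1")
lambda_client = boto3.client("lambda", region_name="us-west-1")

stop_function_arn = "arn:aws:lambda:us-west-1:111122223333:function:StopEC2Instances"

# Stop the instances at 19:00 UTC on weekdays (cron expressions are evaluated in UTC).
rule = events.put_rule(
    Name="StopEC2Instances",
    ScheduleExpression="cron(0 19 ? * MON-FRI *)",
    State="ENABLED",
)

# Allow EventBridge to invoke the stop function.
lambda_client.add_permission(
    FunctionName="StopEC2Instances",
    StatementId="AllowEventBridgeInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)

# Point the rule at the Lambda function.
events.put_targets(
    Rule="StopEC2Instances",
    Targets=[{"Id": "StopEC2InstancesTarget", "Arn": stop_function_arn}],
)
```

A second rule with a morning cron expression and the StartEC2Instances function as its target would cover the start side of the schedule.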
Why didn't my AD users sync to IAM Identity Center?
My Active Directory (AD) users didn't sync to AWS Identity and Access Management (IAM) Identity Center (successor to AWS Single Sign-On).
"My Active Directory (AD) users didn't sync to AWS Identity and Access Management (IAM) Identity Center (successor to AWS Single Sign-On).ResolutionAfter connecting your AWS Managed AD or self-managed AD to IAM Identity Center, users in the default "Domain Users" group won't sync to IAM Identity Center. This is because IAM Identity Center can't read AD primary groups and their memberships.To resolve this issue, create new groups in your Managed AD, assign users to the groups, and then sync the users to IAM Identity Center. Using new groups instead of the default "Domain Users" group allows group membership in the IAM Identity Center identity store.For more information, see Active Directory “Domain Users” group does not properly sync into IAM Identity Center.Related informationIAM Identity Center configurable AD syncConnect to a Microsoft AD directoryHow do I get started with using IAM Identity Center and access the AWS access portal?Follow"
https://repost.aws/knowledge-center/iam-identity-center-ad-sync
How do I extend my Linux file system after increasing my EBS volume on my EC2 instance?
"I increased the size of my Amazon Elastic Block Store (Amazon EBS) volume, but my file systems aren't using the full volume. How do I fix that?"
"I increased the size of my Amazon Elastic Block Store (Amazon EBS) volume, but my file systems aren't using the full volume. How do I fix that?ResolutionWhen a volume is expanded to a larger size, the file system must also be resized to take advantage of the larger volume size. You can resize a file system as soon as it's in the optimizing state.The following procedure extends an 8 GB ext4 file system to get full use of a 16 GB volume. The file system is on an Amazon Elastic Compute Cloud (Amazon EC2) instance running Ubuntu.1.    Create a snapshot of your volume before changing your volume or file system. For more information, see Create Amazon EBS Snapshots.2.    Use the df -h command to show the size and the percentage used by the file system.ubuntu@ip-172-31-32-114:~$ df -hFilesystem Size Used Avail Use% Mounted on/dev/xvda1 7.7G 7.7G 0 100% //dev/xvdf 7.9G 7.1G 370M 96% /home/ubuntu/testIn this example, the /dev/xvdf/ file system size is 7.9G and is 96% full.3.    Use the lsblk command to show the size of the xvdf volume.ubuntu@ip-172-31-32-114:~$ lsblkNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTxvda 202:0 0 8G 0 disk└─xvda1 202:1 0 8G 0 part /xvdf 202:80 0 16G 0 disk /home/ubuntu/testIn this example, the size of the xvdf volume is 16G.4.    Connect to your instance using SSH. For more information, see Connect to your Linux instance.5.    If the volume has a partition containing a file system, that partition must be resized before the file system is expanded.6.    Use the resize2fs command to automatically extend the size of the /dev/xvdf/ file system to the full space on the volume.ubuntu@ip-172-31-32-114:~$ sudo resize2fs /dev/xvdfNote: In this example, the volume is using an ext4 file system. Depending on your file system, you might need to use a different utility. For more information, see Extend a Linux file system after resizing a volume.7.    Rerun the df –h command.ubuntu@ip-172-31-32-114:~$ df -hFilesystem Size Used Avail Use% Mounted on/dev/xvda1 7.7G 7.7G 0 100% //dev/xvdf 16G 7.1G 8.0G 48% /home/ubuntu/testThe /dev/xvdf/ file system is now 16G in size and only 48% full.Related informationView information about an Amazon EBS volumeMake an Amazon EBS volume available for use on LinuxExtend a Windows file system after resizing a volumeFollow"
https://repost.aws/knowledge-center/extend-linux-file-system
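Because the file system can be resized as soon as the volume modification reaches the optimizing state, it can help to check that state programmatically before running resize2fs. The following is a small boto3 sketch under that assumption; the volume ID and Region are placeholders.

```python
# Minimal boto3 sketch: confirm that an EBS volume modification has reached the
# "optimizing" or "completed" state before extending the file system.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_volumes_modifications(VolumeIds=["vol-0123456789abcdef0"])
for mod in response["VolumesModifications"]:
    print(mod["VolumeId"], mod["ModificationState"], mod.get("Progress"))
    # resize2fs (or xfs_growfs for XFS) can run once the state is
    # "optimizing" or "completed".
```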
How do I understand the configurationItemDiff field in Amazon SNS ConfigurationItemChangeNotification notifications?
"I received a ConfigurationItemChangeNotification Amazon Simple Notification Service (Amazon SNS) notification. Why did I get this notification, and how do I interpret the information in the configurationItemDiff field?"
"I received a ConfigurationItemChangeNotification Amazon Simple Notification Service (Amazon SNS) notification. Why did I get this notification, and how do I interpret the information in the configurationItemDiff field?ResolutionAWS Config creates a configuration item whenever the configuration of a resource changes (create/update/delete). For a list of resources that AWS Config supports, see Supported resource types. AWS Config uses Amazon SNS to deliver a notification as the changes occur. The Amazon SNS notification payload includes fields to help you track the resource changes in a given AWS Region. For more information, see Example configuration item change notifications.To understand why you receive a ConfigurationItemChangeNotification notification, review the configurationItemDiff details. The fields vary depending on the change type and can form different combinations such as UPDATE-UPDATE, UPDATE-CREATE, and DELETE-DELETE. The following are explanations of some common combinations.UPDATE-CREATE and UPDATE-UPDATEThe following example includes changes in the resource direct relationships and resource configurations. The configurationItemDiff details reveal the following information:Action performed: A managed policy present in the account was attached to an AWS Identity and Access Management (IAM) role.Basic operation performed: UPDATE (updating the number of associations of the resource type AWS::IAM::Policy in an account).Change type combinations:Resource direct relationship change UPDATE-CREATE. A new attachment or association was created between an IAM policy and an IAM role.Resource configuration change UPDATE-UPDATE. The number IAM policy associations increased from 2 to 3 when the policy was attached to the IAM role.Example UPDATE-CREATE and UPDATE-UPDATE configurationItemDiff notification:{ "configurationItemDiff": { "changedProperties": { "Relationships.0": { "previousValue": null, "updatedValue": { "resourceId": "AROA6D3M4S53*********", "resourceName": "Test1", "resourceType": "AWS::IAM::Role", "name": "Is attached to Role" }, "changeType": "CREATE" >>>>>>>>>>>>>>>>>>>> 1 }, "Configuration.AttachmentCount": { "previousValue": 2, "updatedValue": 3, "changeType": "UPDATE" >>>>>>>>>>>>>>>>>>>> 2 } }, "changeType": "UPDATE" }}UPDATE-DELETEThe following example includes changes in the resource direct relationships. The configurationItemDiff details reveal the following information:Action performed: A managed policy present in the account was detached from an IAM user.Basic operation performed: UPDATE (updating the permissions policy associated with the resource type AWS::IAM::User).Change type combination: Resource direct relationship change UPDATE-DELETE. The association between an IAM user and an IAM policy in an account was deleted.Example UPDATE-DELETE configurationItemDiff notification:{ "configurationItemDiff": { "changedProperties": { "Configuration.UserPolicyList.0": { "previousValue": { "policyName": "Test2", "policyDocument": "{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "ec2:RunInstances", "Resource": "arn:aws:ec2:*:*:instance/*", "Condition": { "StringLike": { "aws:RequestTag/VPCId": "*" } } } ]}" }, "updatedValue": null, "changeType": "DELETE" >>>>>>>>>>>>>>>>>>>> 3 } }, "changeType": "UPDATE" }}DELETE-DELETEThe following example includes changes in the resource direct relationships and resource configurations. 
The configurationItemDiff details reveal the following information:Action performed: An IAM role present in an account was deleted.Basic operation performed: DELETE (a resource of the resource type AWS::IAM::Role was deleted).Change type combination: Resource direct relationship change and resource configuration change DELETE-DELETE. The deletion of the IAM role also deleted the association of the IAM policy with the IAM role.Example DELETE-DELETE configurationItemDiff notification:{ "configurationItemDiff": { "changedProperties": { "Relationships.0": { "previousValue": { "resourceId": "ANPAIJ5MXUKK*********", "resourceName": "AWSCloudTrailAccessPolicy", "resourceType": "AWS::IAM::Policy", "name": "Is attached to CustomerManagedPolicy" }, "updatedValue": null, "changeType": "DELETE" }, "Configuration": { "previousValue": { "path": "/", "roleName": "CloudTrailRole", "roleId": "AROAJITJ6YGM*********", "arn": "arn:aws:iam::123456789012:role/CloudTrailRole", "createDate": "2017-12-06T10:27:51.000Z", "assumeRolePolicyDocument": "{"Version":"2012-10-17","Statement":[{"Sid":"","Effect":"Allow","Principal":{"AWS":"arn:aws:iam::123456789012:root"},"Action":"sts:AssumeRole","Condition":{"StringEquals":{"sts:ExternalId":"123456"}}}]}", "instanceProfileList": [], "rolePolicyList": [], "attachedManagedPolicies": [ { "policyName": "AWSCloudTrailAccessPolicy", "policyArn": "arn:aws:iam::123456789012:policy/AWSCloudTrailAccessPolicy" } ], "permissionsBoundary": null, "tags": [], "roleLastUsed": null }, "updatedValue": null, "changeType": "DELETE" } }, "changeType": "DELETE" }Related informationNotifications that AWS Config sends to an Amazon SNS topicFollow"
https://repost.aws/knowledge-center/config-configurationitemdiff
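If you process these notifications programmatically, a small handler can pull out the configurationItemDiff fields shown in the examples above. This is a hypothetical sketch of a Lambda function subscribed to the SNS topic; the field names follow the example payloads in the article.

```python
# Hypothetical Lambda handler that receives the AWS Config SNS notification and
# prints each changed property with its change type (CREATE/UPDATE/DELETE).
import json

def lambda_handler(event, context):
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        if message.get("messageType") != "ConfigurationItemChangeNotification":
            continue
        diff = message.get("configurationItemDiff", {})
        print("Overall change type:", diff.get("changeType"))
        for prop, change in diff.get("changedProperties", {}).items():
            print(f"{prop}: {change.get('changeType')} "
                  f"(previous={change.get('previousValue')}, "
                  f"updated={change.get('updatedValue')})")
```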
How do I use CloudFront to serve HTTPS requests for my Amazon S3 bucket?
I want to configure an Amazon CloudFront distribution to serve HTTPS requests for my Amazon Simple Storage Service (Amazon S3) bucket.
"I want to configure an Amazon CloudFront distribution to serve HTTPS requests for my Amazon Simple Storage Service (Amazon S3).ResolutionOpen the CloudFront console.Choose Create Distribution.Under Origin, for Origin domain, choose your S3 bucket's REST API endpoint from the dropdown list. Or, enter your S3 bucket's website endpoint. For more information, see Key differences between a website endpoint and a REST API endpoint.Under Default cache behavior, Viewer, for Viewer Protocol Policy, select HTTP and HTTPS or Redirect HTTP to HTTPS.Note: Choosing HTTPS Only blocks all HTTP requests.If you're not using an Alternate domain name (CNAME) with CloudFront, then choose Create Distribution to complete the process. If you are using a CNAME, then follow these additional steps before you create the distribution:For Alternate Domain Names (CNAMEs), choose Add item, and then enter your alternate domain name.For Custom SSL Certificate, choose the custom SSL certificate from the dropdown list that covers your CNAME to assign it to the distribution.Note: For more information on installing a certificate, see How do I configure my CloudFront distribution to use an SSL/TLS certificate?Choose Create distribution.Note: After you choose Create distribution, 20 or more minutes can elapse for your distribution to be deployed.For information on using your distribution with Amazon S3, see Using an Amazon S3 bucket. When you use the Amazon S3 static website endpoint, connections between CloudFront and Amazon S3 are available only over HTTP. For information on HTTPS connections between CloudFront and Amazon S3, see How do I use CloudFront to serve a static website hosted on Amazon S3?Be sure to update the DNS for your domain to a CNAME record that points to the CloudFront distribution's provided domain. You can find your distribution's domain name in the CloudFront console.If you're using Amazon Route 53 as your DNS provider, then see Configuring Amazon Route 53 to route traffic to a CloudFront distribution. If you're using another DNS provider, then you can create a CNAME record (www.example.com CNAME d111111abcdef8.cloudfront.net) to point to the distribution's domain.Important: DNS standards require that an apex domain (example.com) use an authoritative (A) record that maps to an IP address. You can point your apex domain to your CloudFront distribution only if you're using Route 53. If you're using another DNS provider, then you must use a subdomain (www.example.com).For additional troubleshooting based on your endpoint type, see the following:I'm using an S3 REST API endpoint as the origin of my CloudFront distribution. Why am I getting 403 Access Denied errors?I'm using an S3 website endpoint as the origin of my CloudFront distribution. Why am I getting 403 Access Denied errors?Related informationAmazon CloudFront pricingRequiring HTTPS for communication between CloudFront and your Amazon S3 originWebsite endpointsCreate a CloudFront distributionFollow"
https://repost.aws/knowledge-center/cloudfront-https-requests-s3
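The same distribution can be created with boto3 instead of the console. The sketch below assumes an S3 REST API endpoint origin and the Redirect HTTP to HTTPS viewer policy; the bucket name, Region, comment, and caller reference are placeholders, and the legacy ForwardedValues cache settings are used only to keep the example short.

```python
# Rough boto3 sketch of the console steps above: distribution with an S3 REST
# API endpoint origin that redirects HTTP to HTTPS.
import time
import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),
        "Comment": "HTTPS distribution for my S3 bucket",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "my-s3-origin",
                    "DomainName": "DOC-EXAMPLE-BUCKET.s3.us-east-1.amazonaws.com",
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "my-s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "MinTTL": 0,
            "ForwardedValues": {
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
        },
    }
)
# Point your DNS CNAME at this domain name once the distribution is deployed.
print(response["Distribution"]["DomainName"])
```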
How do I send email using Lambda and Amazon SES?
I want to use AWS Lambda to send email using Amazon Simple Email Service (Amazon SES).
"I want to use AWS Lambda to send email using Amazon Simple Email Service (Amazon SES).Short descriptionTo send email from a Lambda function using Amazon SES, complete these steps:1.    Create an AWS Identity and Access Management (IAM) policy and execution role for Lambda to run the API call.2.    Verify your Amazon SES identity (domain or email address).3.    Create or update a Lambda function that includes logic for sending email through Amazon SES.Note: To include a PDF attachment in your emails, you must use the Amazon SES SendRawEmail API operation. For more information, see Sending raw email using the Amazon SES API on GitHub.ResolutionNote: The Node.js, Python, and Ruby Lambda function code examples in this article requires modification depending on your use case. Adapt the example to your use case, or design your own in your preferred programming language.Create an IAM policy and execution role for Lambda to run the API call1.    Create an IAM policy using the JSON policy editor. When you create the policy, paste the following JSON policy document into the policy editor:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ses:SendEmail", "ses:SendRawEmail" ], "Resource": "*" } ]}Note: For more information and examples of how to restrict access to this policy, see Example IAM policies for Amazon SES.2.    Attach the IAM policy to an IAM role. For instructions, see the To use a managed policy as a permissions policy for an identity (console) section in Adding IAM identity permissions (console).Note: You'll assign this IAM role to your Lambda function in the following steps.Verify your Amazon SES identity (domain or email address)To verify a domain, see Verifying a DKIM domain identity with your DNS provider.To verify an email address, see Verifying an email address identity.Create or update a Lambda function that includes logic for sending email through Amazon SES1.    If you haven't done so already, create a Lambda function.Note: You can create a Lambda function by using the Lambda console or by building and uploading a deployment package.2.    In the Lambda console, in the left navigation pane, choose Functions.3.    Choose the name of your function.4.    On the Configuration tab, in the Permissions pane, look at the function's Execution Role. Verify that the IAM role with Amazon SES permissions that you created earlier is listed. If the correct IAM role isn't listed, then assign the correct role to the function.5.    Under Function code, in the editor pane, paste one of the following function code examples. Be sure to use the relevant example for your runtime and version of Node.js, Python, or Ruby.Important: Replace us-west-2 with the AWS Region that your verified Amazon SES identity is in. Replace "RecipientEmailAddress", ... with the email address or addresses that you want to send the email to. Replace SourceEmailAddress with your Amazon SES-verified sender email address, or any email address from an Amazon SES-verified domain. Optionally, edit the message body ("Test") and subject line ("Test Email").For Node.js versions 18 and newer, see the following example code:// Copyright 2019 Amazon.com, Inc. or its affiliates. 
All Rights Reserved.// SPDX-License-Identifier: Apache-2.0import { SESClient, SendEmailCommand } from "@aws-sdk/client-ses";const ses = new SESClient({ region: "us-west-2" });export const handler = async(event) => { const command = new SendEmailCommand({ Destination: { ToAddresses: ["RecipientEmailAddress", ...], }, Message: { Body: { Text: { Data: "Test" }, }, Subject: { Data: "Test Email" }, }, Source: "SourceEmailAddress", }); try { let response = await ses.send(command); // process data. return response; } catch (error) { // error handling. } finally { // finally. }};For Node.js versions 16 and older, see the following example code::// Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.// SPDX-License-Identifier: Apache-2.0var aws = require("aws-sdk");var ses = new aws.SES({ region: "us-west-2" });exports.handler = async function (event) { var params = { Destination: { ToAddresses: ["RecipientEmailAddress", ...], }, Message: { Body: { Text: { Data: "Test" }, }, Subject: { Data: "Test Email" }, }, Source: "SourceEmailAddress", }; return ses.sendEmail(params).promise()};For Python version 3.9, see the following example code:import jsonimport boto3client = boto3.client('ses', region_name='us-west-2')def lambda_handler(event, context): response = client.send_email( Destination={ 'ToAddresses': ['RecipientEmailAddress'] }, Message={ 'Body': { 'Text': { 'Charset': 'UTF-8', 'Data': 'This is the message body in text format.', } }, 'Subject': { 'Charset': 'UTF-8', 'Data': 'Test email', }, }, Source='SourceEmailAddress' ) print(response) return { 'statusCode': 200, 'body': json.dumps("Email Sent Successfully. MessageId is: " + response['MessageId']) }For Ruby version 2.7, see the following example code:require "aws-sdk-ses"$ses = Aws::SES::Client.new(region: "us-west-2")def lambda_handler(event:, context:) resp = $ses.send_email({ destination: { to_addresses: ["RecipientEmailAddress"], }, message: { body: { text: { charset: "UTF-8", data: "This is the message body in text format.", }, }, subject: { charset: "UTF-8", data: "Test email", }, }, source: "SourceEmailAddress"}) { statusCode: 200, body: JSON.generate("Message Sent Successfully. #{resp.to_h} ") }endFor more information about using the sendEmail API, see the AWS SDK documentation for JavaScript, Python, Ruby, and Java V2.6.    Choose Deploy.(Optional) Send a test email1.    In the Lambda console, configure a test event for your function.Note: The test payload is required but isn't used for this code example.2.    Choose Test. Lambda uses Amazon SES to send the test email to your recipient.Related informationSending email using Amazon SESIdentity and access management in Amazon SESFollow"
https://repost.aws/knowledge-center/lambda-send-email-ses
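The note above points out that attachments require the SendRawEmail operation. The following is a minimal Python sketch of that approach using the standard library email module; the sender, recipient, Region, and file path are placeholder assumptions.

```python
# Minimal sketch of using SendRawEmail to include a PDF attachment.
import boto3
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

ses = boto3.client("ses", region_name="us-west-2")

msg = MIMEMultipart()
msg["Subject"] = "Test email with attachment"
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg.attach(MIMEText("This is the message body in text format.", "plain"))

# Attach a PDF file (placeholder path).
with open("/tmp/report.pdf", "rb") as f:
    part = MIMEApplication(f.read())
    part.add_header("Content-Disposition", "attachment", filename="report.pdf")
    msg.attach(part)

response = ses.send_raw_email(
    Source="sender@example.com",
    Destinations=["recipient@example.com"],
    RawMessage={"Data": msg.as_string()},
)
print(response["MessageId"])
```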
How do I copy an Amazon Redshift provisioned cluster to a different AWS account?
I want to copy an Amazon Redshift provisioned cluster from one AWS account to another. How can I do that?
"I want to copy an Amazon Redshift provisioned cluster from one AWS account to another. How can I do that?Short descriptionBefore you begin, consider the following requirements:New clusters will have a different Domain Name System (DNS) endpoint. This means that you must update all clients, application codes, and Amazon Kinesis Data Firehose delivery streams to refer to the new endpoint.Amazon Simple Storage Service (Amazon S3) log settings aren't migrated. You must activate database audit logging on the new cluster.Historic information that is stored in STL and SVL tables aren't retained in the new cluster.ResolutionTo copy an Amazon Redshift provisioned cluster to another AWS account, follow these steps:1.    Create a manual snapshot of the cluster that you want to migrate.2.    Share a cluster snapshot with another AWS account.3.    In the destination AWS account, restore the cluster from a snapshot.Related informationAmazon Redshift snapshotsHow do I move my Amazon Redshift cluster from one VPC to another VPC?Managing snapshots using the AWS CLI and Amazon Redshift APIDNS attributes for your VPCFollow"
https://repost.aws/knowledge-center/account-transfer-redshift
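The three steps above can also be scripted with boto3, as in the rough sketch below. The cluster identifiers, snapshot name, Region, and account IDs are placeholders; the restore call must run with credentials from the destination account.

```python
# Rough boto3 sketch of snapshot, share, and restore across accounts.
# Source account: 111122223333, destination account: 444455556666 (placeholders).
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Step 1: create a manual snapshot (run in the source account).
redshift.create_cluster_snapshot(
    SnapshotIdentifier="my-cluster-migration-snapshot",
    ClusterIdentifier="my-source-cluster",
)

# Step 2: share the snapshot with the destination account (run in the source account).
redshift.authorize_snapshot_access(
    SnapshotIdentifier="my-cluster-migration-snapshot",
    AccountWithRestoreAccess="444455556666",
)

# Step 3: restore from the shared snapshot (run with destination-account credentials).
redshift_dest = boto3.client("redshift", region_name="us-east-1")
redshift_dest.restore_from_cluster_snapshot(
    ClusterIdentifier="my-restored-cluster",
    SnapshotIdentifier="my-cluster-migration-snapshot",
    OwnerAccount="111122223333",  # the source account that owns the snapshot
)
```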
How can I optimize file transfer performance over Direct Connect?
I'm experiencing slow file transfer speeds over my AWS Direct Connect connection.
"I'm experiencing slow file transfer speeds over my AWS Direct Connect connection.ResolutionUse the following troubleshooting steps for your use case.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Use Amazon CloudWatch metrics to check for Direct Connect connection over utilization and errorsYou can use CloudWatch metrics to monitor Direct Connect connections and virtual interfaces. For Direct Connect dedicated connections, check the ConnectionBpsEgress and ConnectionBpsIngress metrics for values that exceed network port speeds. Check the ConnectionErrorCount metric for MAC level errors. For more information on troubleshooting MAC level errors, see the ConnectionErrorCount section in Direct Connect connection metrics.For hosted connections, review the VirtualInterfaceBpsEgress and VirtualInterfaceBpsIngress metrics. You can only create one Direct Connect virtual interface for each hosted connection. These metrics are an estimate of the total bitrate of network traffic for the hosted connection.For more information, see Viewing Direct Connect CloudWatch metrics.Optimizing performance when uploading large files to Amazon Simple Storage Service (Amazon S3)For uploading large files to Amazon S3, it's a best practice to leverage multipart uploads. If you're using the AWS CLI, all high-level Amazon S3 commands like cp and sync automatically perform multipart uploads for large files.Use the following AWS CLI Amazon S3 configuration values:max_concurrent_requests - The maximum number of concurrent requests. The default value is 10. Make sure that you have enough resources to support the maximum number of requests.max_queue_size - The maximum number of tasks in the task queue.multipart_threshold - The size threshold the CLI uses for multipart transfers of individual files.multipart_chunksize - When using multipart transfers, this is the chunk size that the CLI uses for multipart transfers of individual files. This value sets the size of each part that the AWS CLI uploads in a multipart upload for an individual file. This setting allows you to break down a larger file (for example, 300 MB) into smaller parts for quicker upload speeds. The default value is 8MB while the minimum value you can set is 5MB.Note: A multipart upload requires that a single file uploaded in a maximum of 10,000 parts. Be sure that the chunksize that you set balances the file size and the number of parts.max_bandwidth - The maximum bandwidth that will be consumed for uploading and downloading data to and from Amazon S3.For more information, see Migrate small sets of data from on premises to Amazon S3 using AWS SFTP.Performance tuning for Server Message Block (SMB) Windows file serversTo optimize network performance for Windows SMB file servers, the Server Message Block (SMB) 3.0 protocol must be negotiated between each client and file server. This is because SMB 3.0 uses protocol improves performance for SMB file servers including the following features:SMB Direct - This feature ensures SMB detects RDMA network interfaces on the files server and automatically uses Remote Direct Memory Access (RDMA). 
RDMA increases throughput, provides low latency, and low CPU utilization.SMB Multichannel - This feature allows file servers to use multiple network connections simultaneously and provides increased throughput.SMB Scale-Out - This feature allows SMB 3.0 in cluster configurations to show a share in all nodes of a cluster in an active/active configuration. This ensures the maximum share bandwidth is the total bandwidth of all file server cluster nodes.For SMB clients, use the robocopy multithreaded feature to copy files and folders to the file server over multiple parallel connections.You can also use Explicit Congestion Notification (ECN) and Large Send Offload (LSO) to reduce throughput.Check for packet loss on the Direct Connect connectionPacket loss occurs when transmitted data packets fail to arrive at their destination resulting in network performance issues. Packet loss is caused by low signal strength at the destination, excessive system utilization, network congestion and network route misconfigurations.For more information, seeHow can I troubleshoot packet loss for my Direct Connect connection?Isolate and diagnose network and application performance issuesYou can use utilities such as iPerf3, tcpdump, and Wireshark to troubleshoot Direct Connect performance issues and analyze network results. Take note of the following settings that affect network throughput on a single TCP stream:Receiver Window Size (RWS) - This indicates the maximum number of bytes the receiver can accept without overflowing buffers.The senders send buffers - This may limit the maximum number of bytes that the receiver can acknowledged. The sender can't discard unacknowledged bytes until it receives acknowledgment. Unacknowledged bytes may have to be retransmitted after a timeout period.The senders MSS (Maximum Segment Size) - The maximum number of bytes a TCP segment can have as a payload. The smaller the MSS, the less the network throughput.The Round Trip Time (RTT) - The higher the RTT is between the sender and receiver, the lower the available network bandwidth.Tip: It's a best practice for the sender to initiate several parallel connections to the receiver during file transfers.For more information, see How can I troubleshoot Direct Connect network performance issues?Related informationAWS Direct Connect featuresBest practices for configuring network interfacesFollow"
https://repost.aws/knowledge-center/direct-connect-file-transfer
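If you upload with the SDK instead of the AWS CLI, the CLI settings listed above (multipart_threshold, multipart_chunksize, max_concurrent_requests, max_bandwidth) have rough boto3 equivalents in TransferConfig. This is a minimal sketch; the bucket, key, file path, and tuning values are placeholders to adjust for your link.

```python
# boto3 counterpart of the AWS CLI S3 multipart settings described above.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,   # start multipart uploads at 8 MB
    multipart_chunksize=16 * 1024 * 1024,  # 16 MB parts (a file can have at most 10,000 parts)
    max_concurrency=10,                    # parallel upload threads
    max_bandwidth=None,                    # bytes/sec cap; None means unlimited
)

s3.upload_file("/data/large-file.bin", "DOC-EXAMPLE-BUCKET", "large-file.bin", Config=config)
```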
How do I monitor my transit gateway and Site-to-Site VPN on a transit gateway using Network Manager?
I want to monitor my transit gateway and my Site-to-Site VPN on a transit gateway. How do I do this using AWS Network Manager?
"I want to monitor my transit gateway and my Site-to-Site VPN on transit gateway. How do I do this using AWS Network Manager?ResolutionBefore you can monitor your transit gateway and your Site-to-Site VPN on a transit gateway using AWS Network Manager, you must have already done the following:Created a global networkRegistered your transit gateway to the global network.Important: If the transit gateway isn't located in the same AWS account as the global account, then you must turn on multi-account support.When the transit gateway is registered to the global network, you see metrics in the Monitoring tab. The Monitoring tab is where you can view transit gateway metrics. For additional information on visualizing and monitoring your transit gateways, see Visualize transit gateways.To monitor your Site-to-Site VPN on transit gateway using Network ManagerFirst, be sure that you have created a Site-to-Site VPN connection on your transit gateway, then do the following:Create a site to represent the physical location of your network.Create a link to represent an internet connection from a device for your new site.Create a device to represent a physical or virtual appliance for your new site.In the newly created device, choose Overview, and then choose Associate Site to associate the newly created site.Associate the customer gateway with your new device.Monitoring optionsTo view the transit gateways VPN status, do the following:Open the Network Manager console.In the navigation pane, choose Global networks.Select your global network ID.Choose Transit gateways.There are three VPN statuses:Down – The percentage of your total transit gateway VPNs that are down.Impaired VPN – The percentage of your total VPNs that are impaired.Up VPN – The percentage of your total VPNs that are up.To see the status of your tunnels, choose Devices, and then choose the VPNs tab. You can also see Amazon CloudWatch metrics for Bytes in and Bytes out for your VPN and tunnel down count in the Monitoring tab.To view events for your IPsec VPN tunnels in the global network, first choose the Transit gateways tab. Then, select the transit gateway where you created the VPN. For more information, see Status update events.You can check the event details in the Amazon CloudWatch console under Logs Insights.To check the event details, choose /aws/events/networkmanagerloggroup in the US West (Oregon) AWS Region and then run the following command:Note: Replace global network ARN with the ARN for your global network and transit gateway ARN with the ARN that you have the Site-to-Site VPN. Replace event name with one of the following events for Site-to-Site VPN:IPsec for a VPN connection has come up.IPsec for a VPN connection has gone downBGP for a VPN connection has been established.BGP for a VPN connection has gone down.Routes in one or more Transit Gateway route tables have been installed.Routes in one or more Transit Gateway route tables have been uninstalled.fields detail.region as Region, detail.changeDescription as Message, resources.1 as Resource, @timestamp as Timestamp | filter resources.0 = "global network ARN” and resources.1 not like 'core-network-' and detail.transitGatewayArn= “transit gateway ARN” and detail.changeDescription= “event name” | sort @timestamp desc | limit 200Note: This command works only if you already onboarded to CloudWatch Logs Insights. For more information, see Monitoring your global network with CloudWatch Events.Follow"
https://repost.aws/knowledge-center/network-manager-monitor-transit-gateway
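The Logs Insights query above can also be run with boto3 instead of the console, as in the hedged sketch below. The log group name matches the article (US West (Oregon) Region); the global network ARN, transit gateway ARN, and event name are placeholders you must replace, and the account must already be onboarded to CloudWatch Logs Insights.

```python
# Sketch: run the Network Manager event query from the article with boto3.
import time
import boto3

logs = boto3.client("logs", region_name="us-west-2")

query = """
fields detail.region as Region, detail.changeDescription as Message,
       resources.1 as Resource, @timestamp as Timestamp
| filter resources.0 = "arn:aws:networkmanager::111122223333:global-network/global-network-0example"
    and resources.1 not like 'core-network-'
    and detail.transitGatewayArn = "arn:aws:ec2:us-west-2:111122223333:transit-gateway/tgw-0example"
    and detail.changeDescription = "IPsec for a VPN connection has gone down."
| sort @timestamp desc
| limit 200
"""

start = logs.start_query(
    logGroupName="/aws/events/networkmanagerloggroup",
    startTime=int(time.time()) - 24 * 3600,  # last 24 hours
    endTime=int(time.time()),
    queryString=query,
)

results = {"status": "Running"}
while results["status"] in ("Scheduled", "Running"):
    time.sleep(2)
    results = logs.get_query_results(queryId=start["queryId"])
print(results["results"])
```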
How can I configure access to the EC2 Serial Console of an unreachable or inaccessible Linux instance?
My Amazon Elastic Compute Cloud (Amazon EC2) Linux instance is unreachable or inaccessible. I didn't configure access to the EC2 Serial Console at the OS-level. How can I modify the OS configurations for the EC2 Serial Console and set the password for any OS user to access the instance?
"My Amazon Elastic Compute Cloud (Amazon EC2) Linux instance is unreachable or inaccessible. I didn't configure access to the EC2 Serial Console at the OS-level. How can I modify the OS configurations for the EC2 Serial Console and set the password for any OS user to access the instance?Short descriptionTo configure access to the serial console, do the following:1.    Access the instance's root volume.2.    Set the password for the root user or any other OS user.3.    Check and update the GRUB settings for the Serial Console.Note: You can skip step 3 if the EC2 Serial Console is working properly on the affected instance and you just need to set the password for your OS user.PrerequisitesTo use the serial console, make sure that you met the prerequisites, except for Set an OS user password. Setting a password is discussed in the following resolution.ResolutionAccess the instance's root volume using a rescue instanceCreate a temporary rescue instance, and then remount your Amazon Elastic Block Store (Amazon EBS) volume on the rescue instance. From the rescue instance, you can check and modify the GRUB settings for the serial console. You can also set the password for the root user or any other OS user.Important: Don't perform this procedure on an instance store-backed instance. Because the recovery procedure requires a stop and start of your instance, any data on that instance is lost. For more information, see Determine the root device type of your instance.1.    Create an EBS snapshot of the root volume. For more information, see Create Amazon EBS snapshots.2.    Open the Amazon EC2 console.Note: Be sure that you're in the correct Region.3.    Select Instances from the navigation pane, and then choose the impaired instance.4.    Choose Instance State, Stop instance, and then select Stop.5.    In the Storage tab, under Block devices, select the Volume ID for /dev/sda1 or /dev/xvda.Note: The root device differs by AMI, but /dev/xvda or /dev/sda1 are reserved for the root device. For example, Amazon Linux 1 and 2 use /dev/xvda. Other distributions, such as Ubuntu 16, 18, CentOS 7, and RHEL 7.5, use /dev/sda1.6.    Choose Actions, Detach Volume, and then select Yes, Detach. Note the Availability Zone.Note: You can tag the EBS volume before detaching it to help identify it in later steps.7.    Launch a rescue EC2 instance in the same Availability Zone.Note: Depending on the product code, you might be required to launch an EC2 instance of the same OS type. For example, if the impaired EC2 instance is a paid RHEL AMI, you must launch an AMI with the same product code. For more information, see Get the product code for your instance.If the original instance is running SELinux (RHEL, CentOS 7 or 8, for example), launch the rescue instance from an AMI that uses SELinux. If you select an AMI running a different OS, such as Amazon Linux 2, any modified file on the original instance has broken SELinux labels.8.    After the rescue instance launches, choose Volumes from the navigation pane, and then choose the detached root volume of the impaired instance.9.    Choose Actions, Attach Volume.10.    Choose the rescue instance ID ( id-xxxxx), and then set an unused device. In this example, /dev/sdf.11.     Use SSH to connect to the rescue instance.12.    
Run the lsblk command to view your available disk devices:lsblkThe following is an example of the output:NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTxvda 202:0 0 15G 0 disk└─xvda1 202:1 0 15G 0 part /xvdf 202:0 0 15G 0 disk └─xvdf1 202:1 0 15G 0 partNote: Nitro-based instances expose EBS volumes as NVMe block devices. The output generated by the lsblk command on Nitro-based instances shows the disk names as nvme[0-26]n1. For more information, see Amazon EBS and NVMe on Linux instances. The following is an example of the lsblk command output on a Nitro-based instance:NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTnvme0n1 259:0 0 8G 0 disk └─nvme0n1p1 259:1 0 8G 0 part /└─nvme0n1p128 259:2 0 1M 0 part nvme1n1 259:3 0 100G 0 disk └─nvme1n1p1 259:4 0 100G 0 part /13.    Run the following command to become root:sudo -i14.    Mount the root partition of the mounted volume to /mnt. In the preceding example, /dev/xvdf1 or /dev/nvme2n1p2 is the root partition of the mounted volume. For more information, see Make an Amazon EBS volume available for use on Linux.Note: In the following example, replace /dev/xvdf1 with the correct root partition for your volume.mount -o nouuid /dev/xvdf1 /mntNote: If /mnt doesn't exist on your configuration, create a mount directory, and then mount the root partition of the mounted volume to this new directory.mkdir /mntmount -o nouuid /dev/xvdf1 /mntYou can now access the data of the impaired instance through the mount directory.15.    Mount /dev, /run, /proc, and /sys of the rescue instance to the same paths as the newly mounted volume:for m in dev proc run sys; do mount -o bind {,/mnt}/$m; doneCall the chroot function to change into the mount directory.Note: If you have a separate /boot and /etc partitions, mount them to /mnt/boot and /mnt/etc before running the following command.chroot /mntSet the password for the root user or any other OS user.Use the passwd command to set the password for your OS user. In the following example, the user is root:passwd rootCheck and update the GRUB settings for the serial console.Note: You can skip this step if the EC2 Serial console is working properly on the affected instance and you just need to set the password for your OS user.The supported serial console port for Linux is ttyS0. If the screen remains black without providing any output when connecting to the EC2 Serial Console, you should make sure that the console entry is properly configured in the GRUB settings. The examples provided below are taken from the AWS Marketplace AMIs for the different distributions in which the serial console is working properly:GRUB2 for Amazon Linux 2, RHEL , and CentOS 71.    Verify that the console entry for the ttyS0 is properly set in the GRUB_CMDLINE_LINUX_DEFAULT line of the /etc/default/grub file:Amazon Linux 2GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 nvme_core.io_timeout=4294967295 rd.emergency=poweroff rd.shell=0"RHEL 7GRUB_CMDLINE_LINUX="console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau crashkernel=auto"CentOS 7GRUB_CMDLINE_LINUX="console=tty0 crashkernel=auto net.ifnames=0 console=ttyS0"2.    If the console entry for the ttyS0 is not set, add it in the GRUB_CMDLINE_LINUX_DEFAULT line. Then update GRUB to regenerate the /boot/grub2/grub.cfg file:grub2-mkconfig -o /boot/grub2/grub.cfgGRUB1 (Legacy GRUB) for Red Hat 6 and Amazon Linux 11.    
Verify that the console entry for the ttyS0 is properly set in the kernel line of the /boot/grub/grub.conf file:Amazon Linux 1kernel /boot/vmlinuz-4.14.252-131.483.amzn1.x86_64 root=LABEL=/ console=tty1 console=ttyS0 selinux=0 nvme_core.io_timeout=4294967295Red Hat 6kernel /boot/vmlinuz-2.6.32-573.el6.x86_64 console=ttyS0 console=ttyS0,115200n8 ro root=UUID=0e6b1614-7bbe-4d6e-bc78-a5556a123ba8 rd_NO_LUKS KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 xen_blkfront.sda_is_xvda=1 console=tty0 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_LVM rd_NO_DM2.    If the console entry for the ttyS0 is not set, add it in the /boot/grub/grub.conf file according to the preceding examples.GRUB2 for Ubuntu 16.04, 18.04 and 20.041.    Verify that the console entry for the ttyS0 is properly set in the GRUB_CMDLINE_LINUX_DEFAULT line of the /etc/default/grub.d/50-cloudimg-settings.cfg file:GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0 nvme_core.io_timeout=4294967295"2.    If the console entry console=ttyS0 is not present**,** add it in the GRUB_CMDLINE_LINUX_DEFAULT line. Then update the GRUB configuration using the following command:update-grubGRUB2 for RHEL 8 and CentOS 81.    Run the grubby --default-kernel command to see the current default kernel:grubby --default-kernel2.    Run the grubby --info=ALL command to see all the available kernels, their indexes, and their args:grubby --info=ALL3.    Verify that the console entry for the ttyS0 is properly set in the args line for the default kernel outlined in the step 1.RHEL 8index=0kernel="/boot/vmlinuz-4.18.0-305.el8.x86_64"args="ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto $tuned_params"root="UUID=d35fe619-1d06-4ace-9fe3-169baad3e421"initrd="/boot/initramfs-4.18.0-305.el8.x86_64.img $tuned_initrd"title="Red Hat Enterprise Linux (4.18.0-305.el8.x86_64) 8.4 (Ootpa)"id="0c75beb2b6ca4d78b335e92f0002b619-4.18.0-305.el8.x86_64"CentOS 8index=2kernel="/boot/vmlinuz-4.18.0-193.19.1.el8_2.x86_64"args="ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 nvme_core.io_timeout=4294967295 nvme_core.max_retries=10 crashkernel=auto $tuned_params"root="UUID=b437cbaa-8fe5-49e4-8537-0895c219037a"initrd="/boot/initramfs-4.18.0-193.19.1.el8_2.x86_64.img $tuned_initrd"title="CentOS Linux (4.18.0-193.19.1.el8_2.x86_64) 8 (Core)"id="dc49529e359897df0b9664481b009b1f-4.18.0-193.19.1.el8_2.x86_64"4.    If the console entry for the ttyS0 isn't set, then use the following grubby command to append it to the args of the default kernel:grubby --args console=ttyS0,115200n8 --update-kernel DEFAULTUnmount and detach the root volume from the rescue instance, and then attach the volume to the impaired instance1.    Exit from chroot, and unmount /dev, /run, /proc, and /sys:exitumount /mnt/{dev,proc,run,sys,}2.    From the Amazon EC2 console, choose Instances, and then choose the rescue instance.3.    Choose Instance State, Stop instance, and then select Yes, Stop.4.    Detach the root volume id-xxxxx (the volume from the impaired instance) from the rescue instance.5.    Attach the root volume you detached in step 4 to the impaired instance as the root volume (/dev/sda1), and then start the instance.Note: The root device differs by AMI. The names /dev/xvda or /dev/sda1 are reserved for the root device. For example, Amazon Linux 1 and 2 use /dev/xvda. 
Other distributions, such as Ubuntu 16, 18, CentOS 7, and RHEL 7.5, use /dev/sda1.Now you can access the OS of the impaired instance through the EC2 Serial Console using the password defined in the previous step for the root or any other OS user.Related informationConfigure GRUBFollow"
https://repost.aws/knowledge-center/ec2-serial-console-access-unreachable-instance
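After the password and GRUB settings are in place, one way to open the serial console session from a script is to push a temporary SSH public key with EC2 Instance Connect, as in the hedged sketch below. The instance ID, Region, key path, and the example endpoint in the comment are placeholder assumptions; confirm the serial console endpoint for your Region.

```python
# Sketch: push a temporary SSH public key for an EC2 Serial Console session.
import boto3

client = boto3.client("ec2-instance-connect", region_name="us-east-1")

with open("/home/user/.ssh/serial_console_key.pub") as f:
    public_key = f.read()

client.send_serial_console_ssh_public_key(
    InstanceId="i-1234567890abcdef0",
    SerialPort=0,
    SSHPublicKey=public_key,
)
# The key is valid for about 60 seconds; connect promptly, for example:
# ssh -i serial_console_key i-1234567890abcdef0.port0@serial-console.ec2-instance-connect.us-east-1.aws
```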
What are best practices for securing my Linux server running on Lightsail?
I'm a system administrator for Amazon Lightsail instances running Linux. What are some server security best practices I can use to help protect my data?
"I'm a system administrator for Amazon Lightsail instances running Linux. What are some server security best practices I can use to help protect my data?ResolutionThe following are basic Linux server security best practices. While these are important considerations for Linux server security, keep in mind that this isn't a complete list. There are many complex settings that should be configured and addressed by your local system administrator team based on your specific requirements and use case.Encrypt data communication to and from your Linux server.Use SCP, SSH, rsync, or SFTP for file transfers. Avoid using services such as FTP, Telnet, and so on, as these aren't secure. To maintain a secure (HTTPS) connection, install and configure an SSL certificate on your server.Minimize software to minimize vulnerability in Linux and perform security audits on a regular basis.Don't install unnecessary software so that you avoid introducing vulnerabilities from software or packages. If possible, identify and remove all unwanted packages.Keep the Linux kernel and software up to date.Applying security patches is an important part of maintaining your Linux server. Linux provides all of the necessary tools to keep your system updated. Linux also allows for easy upgrades between versions. Review and apply all security updates as soon as possible and make sure that you update to the latest available kernel. Use the respective package managers based on your Linux distributions, such as yum, apt-get, or dpkg to apply all security updates.Use Linux security extensions.Linux comes with various security features that you can use to guard against misconfigured or compromised programs. If possible, use SELinux and other Linux security extensions to enforce limitations on network and other programs. For example, SELinux provides a variety of security policies for the Linux kernel.Disable the root login.It's a best practice not to log in as a root user. You should use sudo to run root level commands when required. Sudo greatly enhances the security of the system without sharing the credentials with other users and administrators.Find listening network ports using SS or netstat and close or restrict all other ports.It's important to pay attention to which ports are listening on the system's network interfaces. This can be done through ss or netstat. Any open ports might be evidence of an intrusion.Configure both Lightsail firewall and OS-level firewalls on Linux servers for an additional level of security.Use Lightsail firewall to filter out traffic and allow only necessary traffic to your server. OS-level firewall is a user space application program that allows you to configure the firewalls provided by the Linux kernel. You can use iptables, ufw, firewalld, and so on, depending on your Linux distribution.Use auditd for system accounting.Linux provides auditd for system auditing. Auditd writes audit records to the disk. It also monitors various system activites, such as system logins, authentications, account modifications, and SELinux denials. These records help administrators identify malicious activity or unauthorized access.Install an intrusion detection system (IDS).Use fail2ban or denyhost as an IDS. 
Fail2ban and denyhost scan the log files for too many failed login attempts and block the IP address that's showing signs of malicious activity.Create backups on a regular basis.For more information, see Snapshots in Amazon Lightsail.Avoid providing read, write, and run Permissions (777) for files and directories to users, groups, and others.You can use chmod to restrict access to files and directories, such as the web-root directory, document-root, and so on. Edit the permissions to provide access to authorized users only.Related informationSecurity in Amazon LightsailCompliance Validation for Amazon LightsailInfrastructure Security in Amazon LightsailBest practices for securing Windows Server-based Lightsail instancesFollow"
https://repost.aws/knowledge-center/lightsail-secure-linux-server
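As a companion to the Lightsail firewall recommendation above, the following boto3 sketch restricts SSH to a single admin IP address while leaving HTTPS open. The instance name and CIDR are placeholders; note that the call replaces the existing firewall rules.

```python
# Sketch: tighten the Lightsail instance firewall (allow SSH from one IP only).
import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

lightsail.put_instance_public_ports(
    instanceName="my-linux-instance",
    portInfos=[
        {"fromPort": 22, "toPort": 22, "protocol": "tcp", "cidrs": ["203.0.113.10/32"]},
        {"fromPort": 443, "toPort": 443, "protocol": "tcp", "cidrs": ["0.0.0.0/0"]},
    ],
)

# put_instance_public_ports replaces all existing rules, so include every port
# that should remain open. Verify the result:
print(lightsail.get_instance_port_states(instanceName="my-linux-instance"))
```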
How do I view web interfaces that are hosted on Amazon EMR clusters?
I want to access the application user interfaces (UIs) that are on my Amazon EMR cluster.
"I want to access the application user interfaces (UIs) that are on my Amazon EMR cluster.ResolutionApplications that you install on an Amazon EMR cluster, such as Apache Spark and Apache Hadoop, publish web UIs on the master node. You have several options for accessing these user interfaces, depending on your level of technical expertise. For more information about the options, see View web interfaces hosted on Amazon EMR clusters.Follow"
https://repost.aws/knowledge-center/view-emr-web-interfaces
How can I get my Amazon SNS topic to receive Amazon RDS notifications?
I want my Amazon Simple Notification Service (Amazon SNS) topic to receive Amazon Relational Database Service (Amazon RDS) notifications.
"I want my Amazon Simple Notification Service (Amazon SNS) topic to receive Amazon Relational Database Service (Amazon RDS) notifications.ResolutionCheck if your SNS topic is encrypted1.    Open the Amazon SNS console.2.    On the navigation panel, choose Topics, and then choose the topic that you want to receive an RDS notification.3.    Choose the Encryption tab.If you see Configured in the Encryption section, then your topic is encrypted. You also see your AWS KMS key (KMS key) and KMS ARN.If your topic is encrypted, grant Amazon RDS the necessary permissions to access the AWS KMS key. For more information, see Turn on compatibility between event sources from AWS services and encrypted topics.Note: For encrypted topics to receive Amazon RDS notifications, you must use an AWS KMS key to encrypt the SNS topic. You must modify the AWS KMS key policy to add the permissions for the operations: kms:GenerateDataKey* and kms:Decrypt.If your topic isn't encrypted, continue to the Validate the access policy of your SNS topic section of this article.Validate the access policy of your SNS topicYour SNS access policy must have permissions to allow Amazon RDS to publish events to your SNS topic.1.    Open the Amazon SNS console.2.    On the navigation panel, choose Topics, and then choose the topic that you want to receive an RDS notification.3.    Choose the Access policy tab.If your SNS access policy doesn't allow Amazon RDS to publish events to your SNS topic, then complete the following steps to update your policy:1.    In the Details section of your topic page, choose Edit.2.    Expand the Access policy section, and then copy and paste the preceding policy into the JSON editor.3.    Choose Save changes.{ "Version": "2012-10-17", "Id": "SNSAccessPolicy", "Statement": [ { "Sid": "PolicyForRDSToSNS", "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": "SNS:Publish", "Resource": "your-SNS-topic-ARN", "Condition": { "ArnLike": { "aws:SourceArn": "arn:aws:rds:your-AWS-region:your-AWS-account-ID:*" } } } ]}Note: Update the placeholder values in the policy with your values.Verify that your RDS event notification has the correct category selected for the type of event you're expectingChoose the right category for the notification that you want. For example, if you want to receive notifications for instance restarts and shutdowns, then select the availability category and instances as the event source. The availability category covers the following events:"RDS-EVENT-0006 : The DB instance restarted""RDS-EVENT-0004 : DB instance shutdown""RDS-EVENT-0022 : An error has occurred while restarting MySQL or MariaDB"Check the configuration of the event subscription:1.    Open the Amazon RDS console.2.    On the navigation panel, choose Event subscriptions, and then choose your event subscription.3.    In the Event subscription details section of your subscription page, note the values in the following fields: Source type, Sources, and Event categories.4.    Choose the correct source and event category for the type of event that fits your use case.Edit the configuration of the event subscription:Note: The following steps assume a scenario where you want to receive notifications for all instance resources and shutdowns. For more information on the different types of supported events and their categories, see Amazon RDS event categories and event messages.1.    On the navigation panel of the Amazon RDS console, choose Event subscriptions, and then choose your event subscription.2.    
On your subscription page, choose Actions, Edit.3.    In the Source section, for Source Type, select Instances.4.    For Instances to include, select All instances.5.    For Event categories to include, select Select specific event categories.6.    For Specific event categories, select availability.7.    Choose Save.Related informationEncrypting messages published to Amazon SNS with AWS KMSFollow"
https://repost.aws/knowledge-center/sns-topics-rds-notifications
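The event subscription described above can also be created programmatically. This is a minimal boto3 sketch for availability events from all DB instances; the subscription name, topic ARN, and Region are placeholders.

```python
# Sketch: subscribe an SNS topic to RDS "availability" events for all DB instances.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_event_subscription(
    SubscriptionName="rds-availability-events",
    SnsTopicArn="arn:aws:sns:us-east-1:111122223333:my-rds-topic",
    SourceType="db-instance",
    EventCategories=["availability"],
    # Omitting SourceIds applies the subscription to all DB instances.
    Enabled=True,
)
```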
"How do I check EKS Anywhere cluster component logs on primary and worker nodes for BottleRocket, Ubuntu, or Redhat?"
I want to check the component logs when the creation of a control plane or data plane machines fails in Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere.
"I want to check the component logs when the creation of a control plane or data plane machines fails in Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere.Short descriptionDuring the creation of an Amazon EKS Anywhere workload cluster, you can check the logs for each machine in the control plane, etcd, and data plane.To check the component logs in each machine, the following conditions must be met:EKS Anywhere is trying to create a workload cluster, and each machine is in the process of creation.Each machine allows you to log in through SSH on the control plane, etcd, and data plane.ResolutionCheck the status of each machine with the $ kubectl get machines command.Example management cluster:$ kubectl get machines -ANAMESPACE NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSIONeksa-system mgmt-etcd-bwnfq mgmt vsphere://4230b0d5-7b14-4753-bd41-3dbe4987dbc4 Running 5h26meksa-system mgmt-etcd-bzm77 mgmt vsphere://4230b278-1fb4-f539-0afe-9f12afebf86b Running 5h26meksa-system mgmt-etcd-dzww2 mgmt vsphere://42309b5a-b0ad-58a5-1e40-5fe39a3d1640 Running 5h26meksa-system mgmt-jw8dl mgmt 10.4.11.19 vsphere://42304059-c833-48d3-9856-7f902c852743 Running 5h26m v1.24.9-eks-1-24-7eksa-system mgmt-md-0-66b858b477-6cbcz mgmt 10.4.35.76 vsphere://4230efad-5d42-c570-36c5-bf9ee92ee011 Running 5h26m v1.24.9-eks-1-24-7eksa-system mgmt-md-0-66b858b477-8h88c mgmt 10.4.19.38 vsphere://4230edbf-db9b-3ae9-a2e6-8421e06863fb Running 5h26m v1.24.9-eks-1-24-7eksa-system mgmt-s7fb7 mgmt 10.4.67.152 vsphere://42301d6f-feb1-d967-9750-148d0823c7b5 Running 5h26m v1.24.9-eks-1-24-7After you check your machines' statuses and verify that you can check their component logs, log in to each machine through SSH. In the following example, user is the SSH login user that's specified in each provider's MachineConfig:$ ssh -i private_key_file user@Machine_IP_addressDepending on your machine's operating system (OS), follow the relevant steps to check its component logs.Note: The control plane section refers to a machine with a name that has the cluster name prefix ("cluster_name-"). The etcd section refers to a machine with a name that has the cluster name and etcd prefix ("cluster_name-etcd-"). The data plane section refers to a machine with a name that has the cluster name and worker node name prefix ("cluster_name-worker_node_name-"). Depending on the ClusterConfig settings, etcd might not have a dedicated machine and instead starts on the control plane.Machines with BottleRocket OSWhen you log in with SSH, you also log in to the admin container. 
For debugging purposes, obtain root privileges with the following command before you check the logs:$ sudo sheltiecontrol planeFor the kubelet log, run the following command:# journalctl -u kubelet.service --no-pagerFor the containerd log, run the following command:# journalctl -u containerd.service --no-pagerFor the machine initialization log, run the following command:# journalctl _COMM=host-ctr --no-pagerFor each container log, check the logs in the /var/log/containers directory.For Kubernetes kube-apiserver, kube-controller-manager, kube-scheduler, and kube-vip manifests, check the files in the /etc/kubernetes/manifest directory.etcdFor the containerd log, run the following command::# journalctl -u containerd.service --no-pagerFor the machine initialization log, run the following command::# journalctl _COMM=host-ctr --no-pagerFor the etcd log, look in the /var/log/containers directory.data planeFor the kubelet log, run the following command:# journalctl -u kubelet.service --no-pagerFor the containerd log, run the following command:# journalctl -u containerd.service --no-pagerFor the machine initialization log, run the following command:# journalctl _COMM=host-ctrFor each container log, check the logs in the /var/log/containers directory.Note: If you use AWS Snow as your provider, then also check the results of the following commands on each node:# journalctl -u bootstrap-containers@bottlerocket-bootstrap-snow.service# systemctl status bootstrap-containers@bottlerocket-bootstrap-snowMachines with Ubuntu or Red Hat Enterprise Linux OSFor debugging purposes, obtain root privileges with the following command before you check the logs:$ sudo su -control planeFor the kubelet log, run the following command:# journalctl -u kubelet.service --no-pagerFor the containerd log, run the following command:# journalctl -u containerd.service --no-pagerFor the machine initialization log, run the following command:# cat /var/log/cloud-init-output.logFor each container log, check the logs in the /var/log/containers directory.For userdata that initiates when the machine starts, run the following command:# cat /var/lib/cloud/instance/user-data.txtFor Kubernetes kube-apiserver, kube-controller-manager, kube-scheduler, and kube-vip manifests, check the files in the /etc/kubernetes/manifest directory.etcdFor the etcd log, run the following command:# journalctl -u etcd.service --no-pagerFor the machine initialization log, run the following command:# cat /var/log/cloud-init-output.logFor user data that initiates when the machine starts, run the following command:# cat /var/lib/cloud/instance/user-data.txtdata planeFor the kubelet log, run the following command:# journalctl -u kubelet.service --no-pagerFor the containerd log, run the following command:# journalctl -u containerd.service --no-pagerFor the machine initialization log, run the following command:# cat /var/log/cloud-init-output.logFor user data that initiates when the machine starts, run the following command:cat /var/lib/cloud/instance/user-data.txtFor each container log, check the logs in the /var/log/containers directory.Follow"
https://repost.aws/knowledge-center/eks-anywhere-check-component-logs
Can I switch my ACM certificate’s validation method?
I want to change the validation method for my AWS Certificate Manager (ACM) certificate. How do I do that?
"I want to change the validation method for my AWS Certificate Manager (ACM) certificate. How do I do that?Short descriptionAfter submitting a certificate request with ACM, it's not possible to change the validation method.ResolutionIf you want to switch the ACM certificate validation method, you must request a new certificate for the same domain using your preferred validation method. After your new certificate is issued, associate it to the resources used with the previous certificate.For more information on requesting a new certificate, see Request a public certificate.Note: DNS validation has several advantages over email validation, especially if Amazon Route 53 is the DNS provider for your domain. For more information, see Using DNS to validate domain ownership.To identify which resources the previous ACM certificate was associated with, see Describe ACM Certificates.For replacing your ACM certificate on your load balancer or Amazon CloudFront distribution, see:Update an HTTPS listener for your Application Load BalancerReplace the SSL certificate for your Classic Load BalancerUpdate a TLS listener for your Network Load BalancerRotating SSL/TLS certificatesRelated informationWhy am I not receiving validation emails when using ACM to issue or renew a certificate?Why is my AWS Certificate Manager (ACM) certificate DNS validation status still pending validation?Follow"
https://repost.aws/knowledge-center/switch-acm-certificate
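If you decide to replace an email-validated certificate with a DNS-validated one, the request can also be made from the AWS CLI. This is a sketch; example.com and www.example.com are placeholder domain names.
$ aws acm request-certificate \
    --domain-name example.com \
    --subject-alternative-names www.example.com \
    --validation-method DNS
Note the CertificateArn in the output, create the CNAME records that aws acm describe-certificate returns for validation, and then associate the issued certificate with the resources that used the previous certificate.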
How do I troubleshoot fine-grained access control issues in my OpenSearch Service cluster?
I'm experiencing access control errors or issues in my Amazon OpenSearch Service cluster.
"I'm experiencing access control errors or issues in my Amazon OpenSearch Service cluster.Short descriptionYou might receive fine-grained access control (FGAC) errors, or require additional configuration in your OpenSearch Service cluster. To resolve these issues, follow these troubleshooting steps for your use case.Note: Because of the managed design of OpenSearch Service, anonymous access isn't supported.Resolution"security_exception","reason":"no permissions" 403 errorsTo resolve this error, first check if the user or backend role in your OpenSearch Service cluster has the required permissions. See Permissions on the OpenSearch website. Then, complete the steps from the OpenSearch website to map the user or backend role to a role."User: anonymous is not authorized to perform: iam:PassRole"You might receive this error when you try to register a manual snapshot. You must map the manage_snapshots role to Identity and Access Management (IAM) role that you used to register the manual snapshot. Then, use that IAM role to send a signed request to the domain."Couldn't find any Elasticsearch data"You might receive this error when you try to create index patterns after upgrading to OpenSearch Service version 7.9. Use the resolve index API to add indices:admin/resolve/index to all indices and aliases when creating an index pattern in an FGAC activated cluster. For more information, see API on the OpenSearch website.When this permission is missing, OpenSearch Service throws a 403 error status code. This is then mapped to a 500 error status code from OpenSearch Dashboards. As a result, the indices aren't listed.401 unauthorized errorsYou might receive a 401 unauthorized error when you use the $ or ! characters in primary credentials with curl -u "user:password". Make sure to put your credentials in single quotes, as in the following example:curl -u 'username' <Domain_Endpoint>Integrate other AWS services with OpenSearch Service when fine-grained access control is activatedTo integrate another AWS service with OpenSearch Service when fine-grained access control is activated, give the correct permissions to the IAM roles for those services. For more information, see Integrations.Provide fine-grained access to specific indices, dashboards, and visualizations based on user tenancyTo provide FGAC access to specific indices or dashboards, map the user to a role that has permissions to the tenant's Kibana index:.kibana_<hash>_<tenant_name>For more information, see Manage OpenSearch Dashboards indices on the OpenSearch website.Use fine-grained access control at a field-level or document-levelTo use fine-grained access control at the field level, set up a role with the required field-level security. Then, map the user to the role that you created. For more information, see Field-level security on the OpenSearch website.To use fine-grained access control at the document level, create an internal dashboard role with the required document-level security. Then, map the user to the internal dashboard. For more information, see Document-level security on the OpenSearch website.Related informationFine-grained access control in Amazon OpenSearch ServiceFollow"
https://repost.aws/knowledge-center/opensearch-fgac-errors
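As a sketch of the role-mapping step described above, the following call maps an IAM role to the manage_snapshots role through the OpenSearch security REST API. It assumes fine-grained access control with an internal master user; the user name, password, domain endpoint, and IAM role ARN are placeholders, and older Elasticsearch-based domains use the _opendistro/_security path instead of _plugins/_security.
$ curl -u 'master-user:Master-Password1!' -XPUT \
    "https://your-domain-endpoint/_plugins/_security/api/rolesmapping/manage_snapshots" \
    -H 'Content-Type: application/json' \
    -d '{"backend_roles": ["arn:aws:iam::111122223333:role/SnapshotRole"]}'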
How do I install a software package from the Extras Library on an EC2 instance running Amazon Linux 2?
How do I install a software package (known as a topic) from the amazon-linux-extras repository on an Amazon Elastic Compute Cloud (Amazon EC2) instance that's running Amazon Linux 2?
"How do I install a software package (known as a topic) from the amazon-linux-extras repository on an Amazon Elastic Compute Cloud (Amazon EC2) instance that's running Amazon Linux 2?Short descriptionTo install a software package from the Extras Library, first confirm that the amazon-linux-extras repository is installed on your instance. Then, list the available software packages, enable the one you're looking for, and then install the package using yum.Note: This resolution is for Amazon Linux 2. These steps don't apply to Amazon Linux 1 2018.03.Resolution1.    Connect to your EC2 Linux instance using SSH.2.    Use the which command to confirm that the amazon-linux-extras package is installed:$ which amazon-linux-extras/usr/bin/amazon-linux-extrasIf the amazon-linux-extras package isn't installed, use yum to install it:$ sudo yum install -y amazon-linux-extras3.    List the available topics.Note: The repository is updated regularly, so the topics and versions that you see might differ from the following list.$ amazon-linux-extras0 ansible2 available [ =2.4.2 =2.4.6 ]2 httpd_modules available [ =1.0 ]3 memcached1.5 available [ =1.5.1 ]4 nginx1.12 available [ =1.12.2 ]5 postgresql9.6 available [ =9.6.6 =9.6.8 ]6 postgresql10 available [ =10 ]8 redis4.0 available [ =4.0.5 =4.0.10 ]9 R3.4 available [ =3.4.3 ]10 rust1 available [ =1.22.1 =1.26.0 =1.26.1 =1.27.2 =1.31.0 ]11 vim available [ =8.0 ]13 ruby2.4 available [ =2.4.2 =2.4.4 ]15 php7.2 available [ =7.2.0 =7.2.4 =7.2.5 =7.2.8 =7.2.11 =7.2.13 =7.2.14 =7.2.16 ]16 php7.1 available [ =7.1.22 =7.1.25 =7.1.27 ]17 lamp-mariadb10.2-php7.2 available [ =10.2.10_7.2.0 =10.2.10_7.2.4 =10.2.10_7.2.5 =10.2.10_7.2.8 =10.2.10_7.2.11 =10.2.10_7.2.13 =10.2.10_7.2.14 =10.2.10_7.2.16 ]18 libreoffice available [ =5.0.6.2_15 =5.3.6.1 ]19 gimp available [ =2.8.22 ]20 docker=latest available [ =17.12.1 =18.03.1 =18.06.1 ]21 mate-desktop1.x available [ =1.19.0 =1.20.0 ]22 GraphicsMagick1.3 available [ =1.3.29 ]23 tomcat8.5 available [ =8.5.31 =8.5.32 =8.5.38 ]24 epel available [ =7.11 ]25 testing available [ =1.0 ]26 ecs available [ =stable ]27 corretto8 available [ =1.8.0_192 =1.8.0_202 ]28 firecracker available [ =0.11 ]29 golang1.11 available [ =1.11.3 ]30 squid4 available [ =4 ]31 php7.3 available [ =7.3.2 =7.3.3 ]32 lustre2.10 available [ =2.10.5 ]33 java-openjdk11 available [ =11 ]34 lynis available [ =stable ]4.    Enable the desired topic. The output shows the commands required for installation. 
For example, to enable the PHP 7.2 topic, use the following command:$ sudo amazon-linux-extras enable php7.20 ansible2 available [ =2.4.2 =2.4.6 ]2 httpd_modules available [ =1.0 ]3 memcached1.5 available [ =1.5.1 ]4 nginx1.12 available [ =1.12.2 ]5 postgresql9.6 available [ =9.6.6 =9.6.8 ]6 postgresql10 available [ =10 ]8 redis4.0 available [ =4.0.5 =4.0.10 ]9 R3.4 available [ =3.4.3 ]10 rust1 available [ =1.22.1 =1.26.0 =1.26.1 =1.27.2 =1.31.0 ]11 vim available [ =8.0 ]13 ruby2.4 available [ =2.4.2 =2.4.4 ]15 php7.2=latest enabled [ =7.2.0 =7.2.4 =7.2.5 =7.2.8 =7.2.11 =7.2.13 =7.2.14 =7.2.16 ]_ php7.1 available [ =7.1.22 =7.1.25 =7.1.27 ]17 lamp-mariadb10.2-php7.2 available [ =10.2.10_7.2.0 =10.2.10_7.2.4 =10.2.10_7.2.5 =10.2.10_7.2.8 =10.2.10_7.2.11 =10.2.10_7.2.13 =10.2.10_7.2.14 =10.2.10_7.2.16 ]18 libreoffice available [ =5.0.6.2_15 =5.3.6.1 ]19 gimp available [ =2.8.22 ]20 docker=latest available [ =17.12.1 =18.03.1 =18.06.1 ]21 mate-desktop1.x available [ =1.19.0 =1.20.0 ]22 GraphicsMagick1.3 available [ =1.3.29 ]23 tomcat8.5 available [ =8.5.31 =8.5.32 =8.5.38 ]24 epel available [ =7.11 ]25 testing available [ =1.0 ]26 ecs available [ =stable ]27 corretto8 available [ =1.8.0_192 =1.8.0_202 ]28 firecracker available [ =0.11 ]29 golang1.11 available [ =1.11.3 ]30 squid4 available [ =4 ]_ php7.3 available [ =7.3.2 =7.3.3 ]32 lustre2.10 available [ =2.10.5 ]33 java-openjdk11 available [ =11 ]34 lynis available [ =stable ]Now you can install: # yum clean metadata # yum install php-cli php-pdo php-fpm php-json php-mysqlnd5.    Install the topic using yum. For example, to install the PHP 7.2 topic, use the following command:$ sudo yum clean metadata && sudo yum install php-cli php-pdo php-fpm php-json php-mysqlnd6.    Use the following commands to verify the installation and confirm the software version:$ yum list installed php-cli php-pdo php-fpm php-json php-mysqlndLoaded plugins: extras_suggestions, langpacks, priorities, update-motdInstalled Packagesphp-cli.x86_64 7.2.16-1.amzn2.0.1 @amzn2extra-php7.2php-fpm.x86_64 7.2.16-1.amzn2.0.1 @amzn2extra-php7.2php-json.x86_64 7.2.16-1.amzn2.0.1 @amzn2extra-php7.2php-mysqlnd.x86_64 7.2.16-1.amzn2.0.1 @amzn2extra-php7.2php-pdo.x86_64 7.2.16-1.amzn2.0.1 @amzn2extra-php7.2 $ which php/usr/bin/php $ php --versionPHP 7.2.16 (cli) (built: Apr 3 2019 18:39:35) ( NTS )Copyright (c) 1997-2018 The PHP GroupZend Engine v3.2.0, Copyright (c) 1998-2018 Zend TechnologiesRelated informationAmazon Linux 2Follow"
https://repost.aws/knowledge-center/ec2-install-extras-library-software
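As a shortcut for steps 4 and 5, the amazon-linux-extras tool also has an install subcommand that enables a topic and installs its packages in one step. This is a sketch based on the php7.2 example above; confirm the exact flags with amazon-linux-extras help on your instance.
$ sudo amazon-linux-extras install php7.2 -y
$ php --version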
Why can't I access a specific folder or file in my Amazon S3 bucket?
I can't access a certain prefix or object that's in my Amazon Simple Storage Service (Amazon S3) bucket. I can access the rest of the data in the bucket. How can I fix this?
"I can't access a certain prefix or object that's in my Amazon Simple Storage Service (Amazon S3) bucket. I can access the rest of the data in the bucket. How can I fix this?Short descriptionCheck the following permissions for any settings that are denying your access to the prefix or object:Ownership of the prefix or objectRestrictions in the bucket policyRestrictions in your AWS Identity and Access Management (IAM) user policyPermissions to object encrypted by AWS Key Management Service (AWS KMS)Also, note the following:If the object is encrypted using an AWS managed KMS key, only the AWS account that encrypted the object can read it.If permissions boundaries and sessions policies are defined, the maximum permissions of the requester can be impacted. As a result, there might also be impact to object access.Restrictions might be specified in other policies such as VPC endpoint policies and service control policies (SCPs). Therefore, check these policies and update them accordingly.You can also control ownership of uploaded objects using Amazon S3 Object Ownership. If Object Ownership is set to "BucketOwnerPreferred", objects that are newly written by other accounts with the bucket-owner-full-control canned ACL transition to the bucket owner.ResolutionOwnership of the prefix or objectBy default, an S3 object is owned by the AWS account that uploaded it. This is true even when the bucket is owned by another account. If other accounts can upload to your bucket, follow these steps to get permissions to the object or prefix that you can't access:1.    Run this AWS Command Line Interface (AWS CLI) command to get the Amazon S3 canonical ID for your account:aws s3api list-buckets --query Owner.IDNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.2.    Run this command to get the Amazon S3 canonical ID of the account that owns the object you can't access:aws s3api list-objects --bucket DOC-EXAMPLE-BUCKET --prefix index.html3.    If the canonical IDs don't match, then you (the bucket owner) don't own the object. For an individual object, the object owner can grant you full control by running this put-object-acl command:aws s3api put-object-acl --bucket DOC-EXAMPLE-BUCKET --key object-name --acl bucket-owner-full-controlFor objects within a prefix, the object owner must re-copy the prefix and grant you full control of the objects as part of the operation. For example, the object owner can run this cp command with the --acl bucket-owner-full-control parameter:aws s3 cp s3://DOC-EXAMPLE-BUCKET/abc/ s3://DOC-EXAMPLE-BUCKET/abc/ --acl bucket-owner-full-control --recursive --storage-class STANDARDTip: You can use a bucket policy to require that other accounts grant you ownership of objects they upload to your bucket.Restrictions in the bucket policy1.    Open the Amazon S3 console.2.    From the list of buckets, open the bucket with the policy that you want to review.3.    Choose the Permissions tab.4.    Choose Bucket policy.5.    Search for statements with "Effect": "Deny". Then, review those statements for references to the prefix or object that you can't access.For example, this bucket policy denies everyone access to the abc/* prefix in DOC-EXAMPLE-BUCKET:{ "Version": "2012-10-17", "Statement": [ { "Sid": "StatementPrefixDeny", "Effect": "Deny", "Principal": { "AWS": "*" }, "Action": "s3:*", "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/abc/*" } ]}6.     
Modify the bucket policy to edit or remove any "Effect": "Deny" statements that are incorrectly denying you access to the prefix or object.Restrictions in your IAM user policy1.    Open the IAM console.2.    From the console, open the IAM user or role that you're using to access the prefix or object.3.    In the Permissions tab of your IAM user or role, expand each policy to view its JSON policy document.4.    In the JSON policy documents, search for policies related to Amazon S3 access. Then, search those policies for any "Effect": "Deny" statements that are blocking your access to the prefix or object.For example, the following IAM policy has an "Effect": "Deny" statement that blocks the IAM identity's access to the prefix abc/* within DOC-EXAMPLE-BUCKET. Then, the policy also has an "Effect": "Allow" statement that grants access to DOC-EXAMPLE-BUCKET. Despite the allow statement for the entire bucket, the explicit deny statement prevents the IAM identity from accessing the prefix abc/*.{ "Version": "2012-10-17", "Statement": [ { "Sid": "StatementPrefixDeny", "Effect": "Deny", "Action": "s3:GetObject", "Resource": [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET/abc/*" ] }, { "Sid": "StatementFullPermissionS3", "Effect": "Allow", "Action": "s3:GetObject", "Resource": [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET", "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" ] } ]}5.    Modify the policy to edit or remove any "Effect": "Deny" statements that are incorrectly denying you access to the prefix or object.Permissions to object encrypted by AWS KMSIf an object is encrypted with an AWS KMS key, then you need permissions to both the object and the key. Follow these steps to check if you can't access the object because you need permissions to an AWS KMS key:1.    Use the Amazon S3 console to view the properties of one of the objects that you can't access. Review the object's Encryption properties.2.    If the object is encrypted with a custom AWS KMS key (KMS key), then review the KMS key policy. Confirm that the key policy allows your IAM identity to perform the following KMS actions:"Action": ["kms:Decrypt"]3.    If your IAM identity is missing permissions to any of these actions, modify the key policy to grant the missing permissions.Important: If your IAM identity and KMS key belong to different accounts, then check to make sure that you have proper permissions. Both your IAM and key policies must grant you permissions to the required KMS actions.Related informationWhy can't I access an object that was uploaded to my Amazon S3 bucket by another AWS account?How can I grant a user access to a specific folder in my Amazon S3 bucket?Do I need to specify the AWS KMS key when I download a KMS-encrypted object from Amazon S3?Follow"
https://repost.aws/knowledge-center/s3-access-file-folder
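To check the AWS KMS cause described above from the command line, you can inspect the object's encryption metadata and then review the key policy of the KMS key it references. The bucket name, object key, and key ID below are placeholders.
$ aws s3api head-object --bucket DOC-EXAMPLE-BUCKET --key abc/object-name
$ aws kms get-key-policy --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
    --policy-name default --output text
The head-object response includes the ServerSideEncryption and SSEKMSKeyId fields, and the key policy must allow your IAM identity to call kms:Decrypt.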
Why can’t I connect to my Amazon EC2 instance using Session Manager?
I can't access my Amazon Elastic Compute Cloud (Amazon EC2) instance using AWS Systems Manager Session Manager.
"I can't access my Amazon Elastic Compute Cloud (Amazon EC2) instance using AWS Systems Manager Session Manager.ResolutionAccess to an instance using Session Manager can fail due to the following reasons:Incorrect session preferencesAWS Identity and Access Management (IAM) permission issuesHigh resource usage on the instanceIf you can't connect to Session Manager, then review the following to troubleshoot the issue:Verify Systems Manager prerequisitesConfirm that the instance appears as a managed instance, and then verify that all Session Manager prerequisites are met. For more information, see Why is my EC2 instance not displaying as a managed node or showing a "Connection lost" status in Systems Manager?AWS KMS configuration issuesReview the Session Manager error messages to determine the type of issue. Then, follow the relevant troubleshooting steps to resolve the issue.Error: "Encountered error while initiating handshake. Handshake timed out. Please ensure that you have the latest version of the session manager plugin"AWS Key Management Service (AWS KMS) encryption is activated in Session Manager preferences and the instance can't reach the AWS KMS endpoints.Run the following command to verify connectivity to AWS KMS endpoints. Replace RegionID with your AWS Region.$ telnet kms.RegionID.amazonaws.com 443For more information and for instructions to connect to the AWS KMS endpoints, see Connecting to AWS KMS through a VPC endpoint.Error: "Encountered error while initiating handshake. Fetching data key failed: Unable to retrieve data key, Error when decrypting data key AccessDeniedException"Confirm that the instance profile or user has the required kms:Decrypt permission for the AWS KMS key that is used to encrypt the session. For more information, see Adding Session Manager permissions to an existing instance profile.Error: "Invalid Keyname:Your session has been terminated for the following reasons: NotFoundException: Invalid keyId xxxx"Verify that the AWS KMS key Amazon Resource Name (ARN) that is specified in the Session Manager preferences to encrypt the session is valid. View the available key ARNs, and then confirm that the ARN specified in Session Manager preferences matches one of the available ARNs. For more information, see Finding the key ID and ARN.RunAs user name is not validError: "Invalid RunAs username"-or-Error: "Unable to start shell: failed to start pty since RunAs user xyz does not exist"Session Manager fails with these errors if Enable Run As support for Linux instances specifies an operating system user name that isn't valid.To fix this issue, provide a valid operating system user name (for example, ubuntu, ec2-user, or centos). The operating system user can be specified by either configuring the session manager preferences or by tagging the IAM user or role that starts the session with the tag key of SSMSessionRunAs and value of os-user-account-name. For more information, see Turn on run as support for Linux and macOS managed nodes.Or, you can clear Enable Run As support for Linux instances.Blank screen displays after starting a sessionWhen you start a session, Session Manager displays a blank screen. 
For troubleshooting steps, see Blank screen displays after starting a session.Other troubleshootingFor more information and other troubleshooting scenarios, see How do I troubleshoot issues with AWS Systems Manager Session Manager?Related informationTroubleshooting Session ManagerHow can I use an SSH tunnel through AWS Systems Manager to access my private VPC resources?Turn on SSH connections through Session ManagerLogging session activityFollow"
https://repost.aws/knowledge-center/ssm-session-manager-connect-fail
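As a quick check of the prerequisites mentioned above, you can confirm that the instance is registered with Systems Manager and that SSM Agent is reporting online. The instance ID is a placeholder.
$ aws ssm describe-instance-information \
    --filters "Key=InstanceIds,Values=i-0abcd1234efgh5678" \
    --query "InstanceInformationList[].{Id:InstanceId,Ping:PingStatus,Agent:AgentVersion}"
If the instance doesn't appear in the output, or PingStatus isn't Online, work through the managed-node troubleshooting article referenced above before you review Session Manager preferences.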
Why did the master user for my RDS for SQL Server instance lose access and how can I gain it back?
"The master user for my Amazon Relational Database Service (Amazon RDS) for SQL Server instance lost access. Or, I need to grant the master user access to a database created by another user. What do I do to restore access or grant access to my master user?"
"The master user for my Amazon Relational Database Service (Amazon RDS) for SQL Server instance lost access. Or, I need to grant the master user access to a database created by another user. What do I do to restore access or grant access to my master user?Short descriptionWhen you create a new DB instance, the default master user automatically receives certain privileges for that DB instance. You can't change the master user name after the DB instance is created.Note: It's a best practice not to use the master user directly in your applications. Instead, use a database user created with the minimal privileges required for your application.If you accidentally delete the master user's permissions, you can restore them by modifying the DB instance and setting a new master user password. For more information, see Master user account privileges.ResolutionThe following are common scenarios that might lead to the master user losing access when connecting to the DB instance. Or the master user might not be able to connect to and access a specific user database.Scenario 1: The master user can't connect to the DB instance due to an explicit DENYThe master user might not be able to connect to the DB instance because an explicit DENY is set on the Connect SQL privilege. By default, the master user is granted Connect SQL by the Amazon RDS System Administrator (rdsa) login. However, in Microsoft SQL Server, an explicit DENY takes precedence over an explicit GRANT.To fix this, do the following:1.    Connect to RDS for SQL Server using the login of the grantor who placed the explicit DENY on Connect SQL for the master user login.2.    Use the following T-SQL command to revoke the explicit DENY. In the following example, the RDS master user login is master_user and the grantor principal is grantor_principal. Change these values to match your use case.USE [master];GOREVOKE CONNECT SQL TO [master_user] AS [grantor_principal];GOScenario 2: The master user can't connect to a specific database because it isn't mapped to a user in the databaseThis might occur in the following circumstances:The database was created by another login account. And, the master user login isn't mapped to a database user in the database and granted privileges to the database.The database user previously mapped to the master user login with proper permissions was explicitly deleted.To resolve this issue, reset the master user password. Resetting the password creates a database user mapped to the master user login if that user was deleted. It also grants the db_owner fixed database role to the user. For instructions on resetting the master user password, see How do I reset the master user password for my Amazon RDS DB instance?Note: The AWS Identity and Access Management (IAM) user resetting the password must have permission to perform the ModifyDBInstance action on the RDS resource.Updating the master user password does the following:Grants the master user the db_owner database-level role to a database created by another user.Restores system privileges to the master user.Restores server-level roles to the master user.Restores server-level permissions to the master user.Restores access to system stored procedures to the master user.Restores access to RDS-specific stored procedures to the master user.Scenario 3: The master user can't perform certain actionsThe master user has db_owner role permission on the database but can't perform certain actions, such as CONNECT, SELECT, INSERT, UPDATE, ALTER, and so on. 
This might occur when the database user mapped to the master user login was explicitly denied certain permissions on the database.To see the list of database roles, and which database users are members of those roles, run the following T-SQL command. In the following replace database_namewith the correct values for your use case.USE [database_name]; GO SELECT DP1.name AS DatabaseRoleName, isnull (DP2.name, 'No members') AS DatabaseUserName FROM sys.database_role_members AS DRM RIGHT OUTER JOIN sys.database_principals AS DP1 ON DRM.role_principal_id = DP1.principal_id LEFT OUTER JOIN sys.database_principals AS DP2 ON DRM.member_principal_id = DP2.principal_idWHERE DP1.type = 'R'ORDER BY DP1.name;Run the following commands to see the list of permissions a user has in a particular database. In the following example, replace database_name with the correct value for your use case.USE [database_name];GOEXECUTE AS USER = 'master_user';SELECT * FROM sys.fn_my_permissions(NULL, 'DATABASE');GOIn this example, the master user is added to the db_denydatawriter and db_denydatareader fixed database roles. Despite being a member of the db_owner fixed database role, the deny privileges of db_denydatawriter and db_denydatareader prohibit SELECT, INSERT, UPDATE and DELETE permissions on the database.To resolve this issue:1.    Log in to the RDS for SQL Server instance using the master user.2.    Use the following T-SQL command to drop the master user as a member of these two roles:USE [database_name];GOALTER ROLE [db_denydatawriter] DROP MEMBER [master_user];ALTER ROLE [db_denydatareader] DROP MEMBER [master_user];GOAfter the commands complete, the master user has SELECT, INSERT, UPDATE, and DELETE permissions on the database restored.For more information about the specific roles the master user has, see Master user account privileges.Related informationResetting the db_owner role passwordMicrosoft SQL Server securityDENY (Transact-SQL)Follow"
https://repost.aws/knowledge-center/rds-sql-server-restore-master-user
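The password reset described in scenario 2 can also be run from the AWS CLI. This is a sketch; the instance identifier and password are placeholders, and the change is applied immediately rather than during the next maintenance window.
$ aws rds modify-db-instance \
    --db-instance-identifier my-sqlserver-instance \
    --master-user-password 'NewStrongPassword123!' \
    --apply-immediately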
How do I plan an upgrade strategy for an Amazon EKS cluster?
"When I upgrade my Amazon Elastic Kubernetes Service (Amazon EKS) cluster, I want to make sure that I follow best practices."
"When I upgrade my Amazon Elastic Kubernetes Service (Amazon EKS) cluster, I want to make sure that I follow best practices.Short descriptionNew versions of Kubernetes technology introduce significant changes to your Amazon EKS cluster. After you upgrade a cluster, you can’t downgrade it. Therefore, for a successful transition to a newer version of Kubernetes, follow the best practices that are outlined in this upgrade plan.When you upgrade to a newer Kubernetes version, you can migrate to new clusters instead of performing in-place cluster upgrades. In this case, cluster backup and restore tools like VMware’s Velero can help you migrate to a new cluster. For more information, see Velero on GitHub.To see current and past versions of Kubernetes that are available for Amazon EKS, see the Amazon EKS Kubernetes release calendar.ResolutionPreparing for an upgradeBefore you begin your cluster upgrade, note the following requirements:Amazon EKS requires up to five free IP addresses from the subnets that you specified when you created your cluster.Make sure that the cluster's AWS Identity and Access Management (IAM) role and security group exist in your AWS account.If you activate secrets encryption, then make sure that the cluster IAM role has permission to use the AWS Key Management Service (AWS KMS) key.Review major updates for Amazon EKS and KubernetesReview all documented changes for the version that you’re upgrading to, and note any required upgrade steps. Also, note any requirements or procedures that are specific to Amazon EKS managed clusters.Refer to the following resources for any major updates to Amazon EKS clusters platform versions and Kubernetes versions:Updating an Amazon EKS cluster Kubernetes versionAmazon EKS Kubernetes versionsAmazon EKS platform versionsFor more information on Kubernetes upstream versions and major updates, see the following documentation on the Kubernetes website and GitHub:Kubernetes release notesKubernetes changelogUnderstand the Kubernetes deprecation policyWhen an API is upgraded, the earlier API is deprecated and eventually removed. To understand how APIs might be deprecated in a newer version of Kubernetes, read the deprecation policy on the Kubernetes website.To check whether you use any deprecated API versions in your cluster, use the Kube No Trouble (kubent) on GitHub. If you do use deprecated API versions, then upgrade your workloads before you upgrade your Kubernetes cluster.To convert Kubernetes manifest files between different API versions, use the kubectl convert plugin. For more information, see Install kubectl convert plugin on the Kubernetes website.What to expect during a Kubernetes upgradeWhen you upgrade your cluster, Amazon EKS launches new API server nodes with the upgraded Kubernetes version to replace the existing nodes. If any of these checks fail, then Amazon EKS reverts the infrastructure deployment, and your cluster remains on the previous Kubernetes version. However, this rollback doesn’t affect any applications that are running, and you can recover any clusters, if needed. During the upgrade process, you might experience minor service interruptions.Upgrading the control plane and data planeUpgrading an Amazon EKS cluster requires updating 2 main components: the control plane (master nodes) and the data plane (worker nodes). When you upgrade these components, keep the following consideration in mind.In-place Amazon EKS cluster upgradesFor in-place upgrades, you can upgrade only to the next highest Kubernetes minor version. 
If there are multiple versions between your current cluster version and the target version, then you must upgrade to each version sequentially. For each in-place Kubernetes cluster upgrade, you must complete the following tasks:Upgrade the cluster control plane.Upgrade the nodes in your cluster.Update your Kubernetes add-ons and custom controllers, as required.Update your Kubernetes manifests, as required.For more information, see Planning and executing Kubernetes version upgrades in Amazon EKS in Planning Kubernetes upgrades with Amazon EKS.Blue/green or canary Amazon EKS clusters migrationA blue/green or canary upgrade strategy is more complex, but it allows upgrades with easy rollback capability and no downtime. For a blue/green or canary upgrade, see Blue/green or canary Amazon EKS clusters migration for stateless ArgoCD workloads.Upgrading Amazon EKS managed node groupsImportant: A node’s kubelet can’t be newer than kube-apiserver. Also, it can’t be more than two minor versions earlier than kube-apiserver. For example, suppose that kube-apiserver is at version 1.24. In this case, a kubelet is supported only at versions 1.24, 1.23, and 1.22.To completely upgrade your managed node groups, follow these steps:1.    Upgrade your Amazon EKS cluster control plane components to the latest version.2.    Update your worker nodes in the managed node group.Migrating to Amazon EKS managed node groupsIf you use self-managed node groups, then you can migrate your workload to Amazon EKS managed node groups with no downtime. For more information, see Seamlessly migrate workloads from EKS self-managed node group to EKS-managed node groups.Identifying and upgrading downstream dependencies (add-ons)Clusters often contain many outside products such as ingress controllers, continuous delivery systems, monitoring tools, and other workflows. When you update your Amazon EKS cluster, you must also update your add-ons and third-party tools. Be sure to understand how add-ons work with your cluster and how they’re updated.Note: It’s a best practice to use managed add-ons instead of self-managed add-ons.See the following examples of common add-ons and their relevant upgrade documentation:Amazon VPC CNI: For the best version of the Amazon VPC CNI add-on to use for each cluster version, see Updating the Amazon VPC CNI plugin for Kubernetes self-managed add-on. Also, see Update the aws-node daemonset to use IAM roles for service accounts in the Amazon EKS best practices guide on GitHub.kube-proxy self-managed add-on: Be sure to update to the latest available kube-proxy container image version for each Amazon EKS cluster version. For more information, see Updating the Kubernetes kube-proxy self-managed add-on.CoreDNS: Be sure to update to the latest available CoreDNS container image version for each Amazon EKS cluster version. For more information, see Updating the CoreDNS self-managed add-on.AWS Load Balancer Controller: Versions 2.4.0 or later of AWS Load Balancer Controller require Kubernetes versions 1.19 or later. For more information, see AWS Load Balancer Controller releases on GitHub. For installation information, Installing the AWS Load Balancer Controller add-on.Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver: Versions 1.1.0 or later of the Amazon EBS CSI driver require Kubernetes versions 1.18 or later. For more information, see Amazon EBS CSI driver releases on GitHub. 
For installation and upgrade information, see Managing the Amazon EBS CSI driver as an Amazon EKS add-on.Amazon Elastic File System (Amazon EFS) Container Storage Interface (CSI) driver: Versions 1.3.x or later of the Amazon EFS CSI driver require Kubernetes versions 1.17 or later. For more information, see Amazon EFS CSI driver releases on GitHub. For installation and upgrade information, see Amazon EFS CSI driver.Upgrading AWS Fargate nodesTo update a Fargate node, delete the pod that the node represents. Then, after you update your control plane, redeploy the pod. Any new pods that you launch on Fargate have a kubelet version that matches your cluster version. Existing Fargate pods aren't changed.Note: To keep Fargate pods secure, Amazon EKS must periodically patch them. Amazon EKS tries to update the pods in a way that reduces impact. However, if pods can't be successfully evicted, then Amazon EKS deletes them. To minimize disruption, see Fargate pod patching.Upgrading groupless nodes that are created by KarpenterWhen you set a value for ttlSecondsUntilExpired, this activates node expiry. After nodes reach the defined age in seconds, Amazon EKS deletes them. This is true even if they’re in use. This allows you to replace nodes with newly provisioned instances, and therefore upgrade them. When a node is replaced, Karpenter uses the latest Amazon EKS optimized AMIs. For more information, see Deprovisioning on the Karpenter website.The following example shows a node that’s deprovisioned with ttlSecondsUntilExpired, and therefore replaced with an upgraded instance:apiVersion: karpenter.sh/v1alpha5kind: Provisionermetadata: name: defaultspec: requirements: - key: karpenter.sh/capacity-type # optional, set to on-demand by default, spot if both are listed operator: In values: ["spot"] limits: resources: cpu: 1000 # optional, recommended to limit total provisioned CPUs memory: 1000Gi ttlSecondsAfterEmpty: 30 # optional, but never scales down if not set ttlSecondsUntilExpired: 2592000 # optional, nodes are recycled after 30 days but never expires if not set provider: subnetSelector: karpenter.sh/discovery/CLUSTER_NAME: '*' securityGroupSelector: kubernetes.io/cluster/CLUSTER_NAME: '*'Note: Karpenter doesn’t automatically add jitter to this value. If you create multiple instances in a short amount of time, then they expire near the same time. To prevent excessive workload disruption, define a pod disruption budget, as shown in Kubernetes documentation.Follow"
https://repost.aws/knowledge-center/eks-plan-upgrade-cluster
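As a sketch of the in-place upgrade sequence described above, the following commands check for deprecated API usage and then upgrade the control plane and a managed node group one minor version at a time. The cluster name, node group name, and target version are placeholders, and kubent is the third-party Kube No Trouble tool mentioned earlier.
$ kubent
$ aws eks update-cluster-version --name my-cluster --kubernetes-version 1.25
$ aws eks update-nodegroup-version --cluster-name my-cluster --nodegroup-name my-nodegroup
Wait for each update to finish successfully (for example, by checking it with aws eks describe-update) before starting the next step, and update add-ons such as kube-proxy, CoreDNS, and the Amazon VPC CNI afterward.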
How do I pass parameters from a scheduled trigger in EventBridge to an AWS Batch job?
I want to pass parameters from a scheduled trigger in Amazon EventBridge to an AWS Batch job.
"I want to pass parameters from a scheduled trigger in Amazon EventBridge to an AWS Batch job.Short descriptionIn AWS Batch, your parameters are placeholders for the variables that you define in the command section of your AWS Batch job definition. These placeholders allow you to:Use the same job definition for multiple jobs that use the same format.Programmatically change values in the command at submission time.It's a best practice to define your parameter as a key-value pair. For example:"Parameters" : {"test" : "abc"}If you register a job definition or submit a job, then use parameter substitution placeholders in the command field of the job's container properties. For example:"Command" : [ "echo", "Ref::test" ]When you submit the preceding job, the Ref::test argument in the container's command is replaced with the default value (abc).You can define a different parameter value for the same parameter key when you submit a job. For example:"Parameters" : {"test" : "hello"}When you submit the preceding job, the Ref::test argument in the container's command is replaced with the custom value (hello) that you defined during job submission.ResolutionSet up your AWS Batch environment1.    Create a compute environment.2.    Create a job queue, and then associate your job queue with the compute environment that you created in step 1.3.    Create a job definition with an image (for example, nginx).Create an EventBridge ruleImportant: You must use camel case for JSON text in your EventBridge rules.1.    Open the EventBridge console.2.    Select Create rule.3.    Enter a Name for your rule. You can optionally enter a Description.4.    In Define pattern, select Event pattern or Schedule, based on your use case.5.    In Select event bus, select the default option of AWS default event bus.6.    In Select targets, choose Batch job queue from the Target dropdown list.7.    For Job queue, enter the ARN of the job queue that you created earlier.8.    For Job definition, enter the name of the job definition that you created earlier.9.    For Job name, enter a name for your job.10.    Expand the Configure input section, and select Constant (JSON text).11.    In the text box that appears, enter the following:{"Parameters": {"name":"test"}, "ContainerOverrides": { "Command": ["echo","Ref::name"] } }The rule submits an AWS Batch job when EventBridge invokes the rule. If the job is successful, your CloudWatch logs print "test" at the following locations:Log Group: /aws/batch/jobLog Stream: yourJobDefinitionName/default/your-ecs-task-ID12.    Select Create.Related informationCreating Amazon EventBridge rules that react to eventsCreating an Amazon EventBridge rule that runs on a scheduleFollow"
https://repost.aws/knowledge-center/batch-parameters-trigger-eventbridge
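The same rule and target can be created from the AWS CLI. This is a sketch; the rule name, ARNs, job definition, and job name are placeholders, and the Input string mirrors the constant JSON text from step 11.
$ aws events put-rule --name batch-parameter-rule --schedule-expression "rate(1 hour)"
$ aws events put-targets --rule batch-parameter-rule --targets '[{
    "Id": "1",
    "Arn": "arn:aws:batch:us-east-1:111122223333:job-queue/my-job-queue",
    "RoleArn": "arn:aws:iam::111122223333:role/service-role/MyEventBridgeBatchRole",
    "BatchParameters": {"JobDefinition": "my-job-definition", "JobName": "my-job"},
    "Input": "{\"Parameters\": {\"name\": \"test\"}, \"ContainerOverrides\": {\"Command\": [\"echo\", \"Ref::name\"]}}"
  }]'
The role in RoleArn must allow events.amazonaws.com to submit jobs to the AWS Batch job queue.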
How can I move an Amazon RDS DB instance from a public subnet to private subnet within the same VPC?
"I have an Amazon Relational Database Service (Amazon RDS) DB instance that is in a public subnet. I want to move my DB instance from a public to a private subnet within the same VPC, and make my DB instance completely private. How can I do this?"
"I have an Amazon Relational Database Service (Amazon RDS) DB instance that is in a public subnet. I want to move my DB instance from a public to a private subnet within the same VPC, and make my DB instance completely private. How can I do this?Short descriptionAmazon RDS doesn't provide an option to change the subnet group of your DB instance within the same VPC. However, you can use the workaround method in this article to move your DB instance from a public subnet to a private subnet. Performing this action makes your DB instance private.This method has a number of advantages, including:You don't need to create a new DB instanceYou don't need to use the snapshot-restore processIt minimizes the downtime involved in creating a new instance and diverting traffic. The only downtime that you see is the failover time.ResolutionTurn off Multi-AZ deployments and public accessibility on your DB instanceIf your DB instance is already set to Single-AZ with the Public accessibility parameter set to No, then skip this step.To modify your DB instance to turn off Multi-AZ deployments, follow these steps:Sign in to the Amazon RDS console.From the navigation pane, choose Databases, and then choose the DB instance that you want to modify.Choose Modify.From the Modify DB Instance page, for Multi-AZ deployment and Public accessibility, choose No.Choose Continue, and then review the summary of modifications.Choose Apply immediately to apply your changes.Review your changes, and if correct, choose Modify DB Instance to save.Discover the IP address of your DB instanceAfter your DB instance returns to the Available state, run dig on the DB instance's endpoint to find its underlying IP address:dig <rds-endpoint>Output:db-RDS-instance.xxxxxxxx.us-east-1.rds.amazonaws.com. 5 IN A 172.39.5.213From the private IP, you can find which subnet your primary instance is using.In this example, the list of subnet CIDRs is as follows:subnet1 -> 172.39.5.0/24subnet2 -> 172.39.4.0/24Because the IP falls under 172.39.5.0/24, you can conclude that the instance is placed in subnet1.Remove the public subnets and add private subnets on your DB instanceAdd all the required private subnets in the subnet group. Also, delete all public subnets from the subnet group except for the one that is used by your primary. In the previous example, you delete everything except subnet1 because it is used by your DB instance.Note: A private subnet is a subnet that is associated with a route table that has no route for an internet gateway.Sign in to the Amazon RDS console.From the navigation pane, choose Subnet groups, and then choose the subnet group that is associated with your DB instance.Choose Edit.From the Add subnets section, choose the Availability Zone and private subnets that you want to add.Select the public subnets that you want to delete, and then choose Remove.Choose Save.Turn on Multi-AZ on your DB instanceModify the DB instance to turn on the Multi-AZ deployment. The new secondary launches in one of the remaining private subnets.Reboot your DB instance with failover and turn off Multi-AZ deploymentWhen your DB instance fails over, the secondary, which is using the private IP, becomes the primary, and the instance in the public subnet becomes the secondary.After you reboot your DB instance with failover, remove the secondary, which is now in the public subnet. To do this, modify the DB instance to turn off Multi-AZ again. 
You can do this by setting Multi-AZ deployment to No.Remove the public subnetRemove the remaining public subnet from the subnet group.Note: Removing subnets from the subnet group is a configuration from the RDS side. It doesn't involve deleting any subnets from the VPC.Check that there are only private subnets in the subnet group.If your DB instance was previously in Multi-AZ deployment, then turn Multi-AZ deployment on again.This solution involves failover and turning on/turning off Multi-AZ so there are few things to consider. For more information, see Multi-AZ DB instance deployments.Note: This method is specific for RDS DB instances. If your DB instance is part of Aurora cluster, then you can use the clone option. Or you can follow the steps in this article, but instead of turning off Multi-AZ, you should delete and recreate the readers.Related informationMulti-AZ DB cluster deploymentsFollow"
https://repost.aws/knowledge-center/rds-move-to-private-subnet
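A minimal sketch of the same sequence with the AWS CLI, where mydbinstance and the endpoint are placeholders; each command corresponds to one of the console steps above.
$ aws rds modify-db-instance --db-instance-identifier mydbinstance \
    --no-multi-az --no-publicly-accessible --apply-immediately
$ dig +short mydbinstance.xxxxxxxx.us-east-1.rds.amazonaws.com
$ aws rds modify-db-instance --db-instance-identifier mydbinstance --multi-az --apply-immediately
$ aws rds reboot-db-instance --db-instance-identifier mydbinstance --force-failover
After the failover, turn Multi-AZ off again, remove the remaining public subnet from the subnet group, and then re-enable Multi-AZ if you need it.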
How can I resolve root certificate file errors when using foreign data wrappers and SSL verify-full on Amazon RDS/Aurora PostgreSQL?
"I'm using Foreign Data Wrappers (FDW) and sslmode that is set to verify-full on Amazon Relational Database Service (Amazon RDS) running PostgreSQL. When I try to create an FDW server for my DB instance, I receive the following error: "root certificate file "/home/rdsdb/.postgresql/root.crt" does not exist". How can I resolve this error?"
"I'm using Foreign Data Wrappers (FDW) and sslmode that is set to verify-full on Amazon Relational Database Service (Amazon RDS) running PostgreSQL. When I try to create an FDW server for my DB instance, I receive the following error: "root certificate file "/home/rdsdb/.postgresql/root.crt" does not exist". How can I resolve this error?Short descriptionTo enable certificate verification in PostgreSQL, sslmode must be set to verify-full. If the sslmode is set to verify-full when you create an FDW server from one Amazon RDS instance to another, you receive the root certificate file error. This error is generated on the DB instance where the CREATE SERVER command is run. You can't access the filesystem on an Amazon RDS instance directly or install the CA certificates, but the required root certificate is already installed on the DB instance. To find the location of the certificate, run the following command:postgres=> show ssl_cert_file;ssl_cert_file-----------------------------------------/rdsdbdata/rds-metadata/server-cert.pem(1 row)To resolve this error, point the FDW connection to the /rdsdbdata/rds-metadata/server-cert.pem file when creating the server.ResolutionTo point the FDW connection to the root certificate file, run a command similar to the following:CREATE SERVER my_foreign_db foreign data wrapper postgres_fdw options (host 'my_db.xyz.eu-west-1.rds.amazonaws.com', port '5432', dbname 'my_db', sslmode 'verify-full', sslrootcert '/rdsdbdata/rds-metadata/server-cert.pem');To confirm that the connection is working, create a user mapping and foreign table:Note: PostgreSQL logs passwords in cleartext in the log files. To prevent this, review How can I stop Amazon RDS for PostgreSQL from logging my passwords in clear-text in the log files?CREATE USER MAPPING FOR dbuser SERVER my_foreign_db OPTIONS (user 'dbuser', password 'dbpasswd');CREATE FOREIGN TABLE foreign_table ( id integer not null, name character(84)) SERVER my_foreign_db OPTIONS (schema_name 'public', table_name 'my_table');A connection isn't made until the table is accessed. To confirm that the connection is working, query the table:SELECT * from foreign_table ;If the FDW connection is successful, the data from the foreign table is returned.Related informationCommon management tasks for PostgreSQL on Amazon RDSPostgreSQL Documentation for postgres_fdwFollow"
https://repost.aws/knowledge-center/root-certificate-file-rds-postgresql
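To see which sslmode and sslrootcert options an existing foreign server was created with before you recreate it, you can query the pg_foreign_server catalog. The connection details are placeholders.
$ psql "host=my_db.xyz.eu-west-1.rds.amazonaws.com dbname=my_db user=dbuser" \
    -c "SELECT srvname, srvoptions FROM pg_foreign_server;"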
How do I set up Time to Live (TTL) in DynamoDB?
I want to set up Time to Live (TTL) on my Amazon DynamoDB table.
"I want to set up Time to Live (TTL) on my Amazon DynamoDB table.ResolutionAmazon DynamoDB TTL allows you to define a per-item timestamp to determine when an item is no longer needed. After the expiration of the TTL timestamp, DynamoDB deletes the item from your table within 48 hours without consuming any write throughput. The time taken to delete the items might vary depending on the size and activity level of your table.To set up TTL, see Enabling Time to Live. When you create a TTL attribute in a table, keep the following in mind:TTL attributes must use the Number data type. Other data types, such as String, aren't supported.TTL attributes must use the epoch time format. For example, the epoch timestamp for October 28, 2019 13:12:03 UTC is 1572268323. You can use a free online converter, such as EpochConverter, to get the correct value.Note: Be sure that the timestamp is in seconds, not milliseconds (for example, use 1572268323 instead of 1572268323000).Related informationTime to LiveTables, items, and attributesFollow"
https://repost.aws/knowledge-center/ttl-dynamodb
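A short sketch of turning on TTL and writing an item that expires in seven days; the table name, key attribute (pk), and TTL attribute (expireAt) are placeholders, and the date command uses GNU syntax as found on Amazon Linux.
$ aws dynamodb update-time-to-live --table-name MyTable \
    --time-to-live-specification "Enabled=true, AttributeName=expireAt"
$ EXPIRE_AT=$(date -d "+7 days" +%s)
$ aws dynamodb put-item --table-name MyTable \
    --item "{\"pk\": {\"S\": \"item-1\"}, \"expireAt\": {\"N\": \"$EXPIRE_AT\"}}"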
How do I use the Reserved Instance coverage report in Cost Explorer?
I want to use the Cost Explorer reports to understand the coverage of my Reserved Instances (RIs).
"I want to use the Cost Explorer reports to understand the coverage of my Reserved Instances (RIs).ResolutionYou can use the RI coverage reports in Cost Explorer to do the following:View the utilization of individual RIs in the chart. You can view your RI utilization in either hours or normalized units. For more information on viewing RI utilization in either hours or normalized units, see Reserved Instance reports.View the number of instance hours or normalized units covered by your RIs in the chart. You can use this information to track your required instance hours or units and how many of those are covered by RIs.View the number of hours or normalized units covered by RIs against the On-Demand table. You can use this information to understand how much you spent on On-Demand Instances and how much you could have saved had you purchased more reservations.Select a single RI or a group of RIs in the table to view their respective coverage in the table.Define a threshold for the coverage you want from RIs, called the coverage target, to see where you can reserve more RIs.To define the coverage target, enter the preferred value for Coverage target, and select Show target line on chart. You can see the target coverage on the chart as a dotted line and the average coverage in the table as a status bar.Instances with a red status bar have no RI coverage.Instances with a yellow status bar are under your set coverage target.Instances with a green status bar have met your coverage target.Instances with a gray bar aren't using your reservations.Related informationRI coverage reportsFollow"
https://repost.aws/knowledge-center/cost-explorer-reserved-instance-coverage
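The same coverage data is available programmatically through the Cost Explorer API, which can be useful for tracking your coverage target outside the console. The dates are placeholders.
$ aws ce get-reservation-coverage \
    --time-period Start=2023-01-01,End=2023-02-01 \
    --granularity MONTHLY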
How do I change my Lightsail plan?
I want to change my current Amazon Lightsail plan to a different plan.
"I want to change my current Amazon Lightsail plan to a different plan.ResolutionYour Lightsail plan is based on the Lightsail bundle that you're using.Upgrading the planTo upgrade your Lightsail plan to a larger instance, take a snapshot, and then create a larger instance from the snapshot.Downgrading the planMoving the Lightsail plan to a smaller instance isn't supported. In Lightsail instances, the data is stored in fixed-sized SSD disks. The data in SSD isn't stored uniformly. There are no methods to downsize an SSD volume because that process can cause data loss and data corruption. This means that it's not possible to automatically move from a larger Lightsail plan to a smaller one.To migrate to a smaller instance size, you must manually copy the site contents to a smaller instance by doing the following:Create a new, smaller Lightsail instance.Back up and restore the data from the old instance to the new instance. For detailed instructions on how to make a backup and restore your applications on Bitnami stacks, see Create and restore application backups. If this method doesn't fit your application, then contact your application's support team to understand how to move the data to a smaller instance.Note: Keep the following in mind when changing your Lightsail plan:Make sure that you're taking a complete backup of the existing Lightsail instance as a snapshot.Download the backup and then upload it to a new instance using SFTP/FTP or data stores such as S3 or Lightsail object storage.If you have a static IP address, be sure to detach it from your old instance. Then, attach it to your new instance.If there's an SSL certificate installed in the old instance, check to see if the certificate files are outside the /opt/bitnami directory. If they are, then you must manually copy the certificate to the new instance.The credentials files in the Bitnami instance also must be copied from the old instance to the new one. These are credential files such as /home/bitnami/bitnami_credentials and /home/bitnami/bitnami_application_password.Related informationExtending the storage space of your Windows Server instance in Amazon LightsailFollow"
https://repost.aws/knowledge-center/change-lightsail-plan
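A sketch of the snapshot-and-upgrade path with the AWS CLI; the instance name, snapshot name, Availability Zone, and bundle ID are placeholders, and get-bundles lists the valid bundle IDs for your Region.
$ aws lightsail create-instance-snapshot \
    --instance-name my-instance --instance-snapshot-name my-instance-snap
$ aws lightsail get-bundles
$ aws lightsail create-instances-from-snapshot \
    --instance-snapshot-name my-instance-snap \
    --instance-names my-bigger-instance \
    --availability-zone us-east-1a \
    --bundle-id medium_2_0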
How can I improve the indexing performance on my Amazon OpenSearch Service cluster?
I want to optimize indexing operations in Amazon OpenSearch Service for maximum ingestion throughput. How can I do this?
"I want to optimize indexing operations in Amazon OpenSearch Service for maximum ingestion throughput. How can I do this?ResolutionBe sure that the shards are distributed evenly across the data nodes for the index that you're ingesting intoUse the following formula to confirm that the shards are evenly distributed:Number of shards for index = k * (Number of data nodes), where k is the number of shards per nodeFor example, if there are 24 shards in the index, and there are eight data nodes, then OpenSearch Service assigns three shards to each node. For more information, see Get started with Amazon OpenSearch Service: How many shards do I need?Increase the refresh_interval to 60 seconds or moreRefresh your OpenSearch Service index so that your documents are available for search. Note that refreshing your index requires the same resources that are used by indexing threads.The default refresh interval is one second. When you increase the refresh interval, the data node makes fewer API calls. The refresh interval can be shorter or faster, depending on the length of the refresh interval. To prevent 429 errors, it's a best practice to increase the refresh interval.Note: The default refresh interval is one second for indices that receive one or more search requests in the last 30 seconds. For more information about the updated default interval, see _refresh API version 7.x on the Elasticsearch website.Change the replica count to zeroIf you anticipate heavy indexing, consider setting the index.number_of_replicas value to "0." Each replica duplicates the indexing process. As a result, disabling the replicas improves your cluster performance. After the heavy indexing is complete, reactivate the replicated indices.Important: If a node fails while replicas are disabled, you might lose data. Disable the replicas only if you can tolerate data loss for a short duration.Experiment to find the optimal bulk request sizeStart with the bulk request size of 5 MiB to 15 MiB. Then, slowly increase the request size until the indexing performance stops improving. For more information, see Using and sizing bulk requests on the Elasticsearch website.Note: Some instance types limit bulk requests to 10 MiB. For more information, see Network limits.Use an instance type that has SSD instance store volumes (such as I3)I3 instances provide fast and local memory express (NVMe) storage. I3 instances deliver better ingestion performance than instances that use General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volumes. For more information, see Petabyte scale for Amazon OpenSearch Service.Reduce response sizeTo reduce the size of OpenSearch Service's response, use the filter_path parameter to exclude unnecessary fields. Be sure that you don't filter out any fields that are required to identify or retry failed requests. Those fields can vary by client.In the following example, the index-name, type-name, and took fields are excluded from the response:curl -XPOST "es-endpoint/index-name/type-name/_bulk?pretty&filter_path=-took,-items.index._index,-items.index._type" -H 'Content-Type: application/json' -d'{ "index" : { "_index" : "test2", "_id" : "1" } }{ "user" : "testuser" }{ "update" : {"_id" : "1", "_index" : "test2"} }{ "doc" : {"user" : "example"} }For more information, see Reducing response size.Increase the value of index.translog.flush_threshold_sizeBy default, index.translog.flush_threshold_size is set to 512 MB. This means that the translog is flushed when it reaches 512 MB. 
The weight of the indexing load determines the frequency of the translog. When you increase index.translog.flush_threshold_size, the node performs the translog operation less frequently. Because OpenSearch Service flushes are resource-intensive operations, reducing the frequency of translogs improves indexing performance. By increasing the flush threshold size, the OpenSearch Service cluster also creates fewer large segments (instead of multiple small segments). Large segments merge less often, and more threads are used for indexing instead of merging.Note: An increase in index.translog.flush_threshold_size can also increase the time that it takes for a translog to complete. If a shard fails, then recovery takes more time because the translog is larger.Before increasing index.translog.flush_threshold_size, call the following API operation to get current flush operation statistics:curl -XPOST "os-endpoint/index-name/_stats/flush?pretty"Replace the os-endpoint and index-name with your respective variables.In the output, note the number of flushes and the total time. The following example output shows that there are 124 flushes, which took 17,690 milliseconds:{ "flush": { "total": 124, "total_time_in_millis": 17690 }}To increase the flush threshold size, call the following API operation:$ curl -XPUT "os-endpoint/index-name/_settings?pretty" -d "{"index":{"translog.flush_threshold_size" : "1024MB"}}"In this example, the flush threshold size is set to 1024 MB, which is ideal for instances that have more than 32 GB of memory.Note: Choose the appropriate threshold size for your OpenSearch Service domain.Run the _stats API operation again to see whether the flush activity changed:$ curl _XGET "os-endpoint/index-name/_stats/flush?pretty"Note: It's a best practice to increase the index.translog.flush_threshold_size only for the current index. After you confirm the outcome, apply the changes to the index template.Related informationBest practices for Amazon OpenSearch ServiceFollow"
https://repost.aws/knowledge-center/opensearch-indexing-performance
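For example, the refresh interval and replica count changes discussed above can be applied together with a single _settings call; the endpoint and index name are placeholders, and the replica count should be restored after the heavy ingestion finishes.
$ curl -XPUT "https://os-endpoint/index-name/_settings" \
    -H 'Content-Type: application/json' \
    -d '{"index": {"refresh_interval": "60s", "number_of_replicas": 0}}'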
How do I apply a resource-based policy on an AWS Secrets Manager secret?
How can I control access to AWS Secrets Manager secrets using resource-based policies?
"How can I control access to AWS Secrets Manager secrets using resource-based policies?Short descriptionWith resource-based policies, you can specify user access to a secret and what actions an AWS Identity and Access Management (IAM) user can perform.Note: A secret is defined as a resource with Secrets Manager.Common use cases for Secrets Manager resource-based policies are:Sharing a secret between AWS accounts.Enforcing permissions, such as adding an explicit deny to the secret.In this example resource-based policy, the IAM element Effect specifies whether the statement results in allow or an explicit deny. The IAM Action element defines the actions that are performed with the secret. The IAM Resource element is the secret that the policy is attached to. The IAM Principal element specifies the user with access to perform actions with the secret.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "secretsmanager:*", "Principal": {"AWS": "arn:aws:iam::123456789999:user/Mary"}, "Resource": "*" } ]}ResolutionFollow these instructions to apply a resource-based policy in Secrets Manager:Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.1.    Follow the instructions for creating a secret. Note the Secret ARN.2.    Copy and paste this policy into your favorite text editor, and then save it as a JSON file such as my_explicit_deny_policy.json.{ "Version": "2012-10-17","Statement": [ { "Effect": "Deny", "Action": "secretsmanager:GetSecretValue", "Principal": {"AWS": "arn:aws:iam::123456789999:user/Mary"}, "Resource": "*" } ]}3.    Use the AWS CLI command put-resource-policy to place a resource policy for the secret to explicitly deny IAM user Mary from retrieving the secret value.aws secretsmanager put-resource-policy --secret-id My_Resource_Secret --resource-policy file:// My_explicit_deny_Policy.json4.    You receive an output similar to the following:{"ARN": "arn:aws:secretsmanager:<your region>:123456789999:secret:My_Resource_Secret","Name": "My_Resource_Secret"}Note: The AWS Key Management Service (AWS KMS) decrypt permission is required only if you use AWS KMS keys to encrypt your secret. A secret can't be retrieved by an IAM principal in a third-party account if the secret is encrypted by the default AWS KMS key.For more information, see Using resource-based policies for Secrets Manager.Follow"
https://repost.aws/knowledge-center/secrets-manager-resource-policy
How do I access Amazon SNS topic delivery logs for push notifications?
I want to access Amazon Simple Notification Service (Amazon SNS) topic delivery logs for push notifications.
"I want to access Amazon Simple Notification Service (Amazon SNS) topic delivery logs for push notifications.Short descriptionBefore you completed the following steps, confirm that you're using SNS endpoints supported by Amazon SNS for log delivery status of notification messages:HTTP and HTTPsAmazon Kinesis Data FirehoseAWS LambdaPlatform application endpointAmazon Simple Queue Service (Amazon SQS)SMSNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.ResolutionConfigure push notification delivery status attributes for Amazon CloudWatch LogsNote: As an alternative to the following console steps, you can configure message delivery status using AWS SDKs or the AWS CLI.1.    Open the Amazon SNS console.2.    On the navigation menu, expand Mobile, and then choose Push notifications.3.    In the Platform applications section, select the platform application that you want to have delivery status for.4.    Choose Edit.5.    Expand Delivery status logging – optional.6.    For Success sample rate, in the % text box, enter 100.7.    In the IAM roles section, for Service role, select Create new service role, and then choose Create new roles. The AWS Identity and Access Management (IAM) console opens.Note: If you already have an IAM role with the right permissions, then you can use that service role by selecting Use existing service role instead.8.    On the IAM console permission request page, choose Allow.9.    After returning to the Amazon SNS console, choose Save changes.Now, an IAM role is created for successful and failed deliveries with the following policy and trust relationships for Amazon SNS. See the following examples:IAM role for successful deliveries:arn:aws:iam::1111111111:role/SNSSuccessFeedbackIAM role for failed deliveries:arn:aws:iam::1111111111:role/SNSFailureFeedbackPolicy:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents", "logs:PutMetricFilter", "logs:PutRetentionPolicy" ], "Resource": [ "*" ] } ]}Trust relationships:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "sns.amazonaws.com" }, "Action": "sts:AssumeRole" } ]}View delivery status logs1.    Open the Amazon CloudWatch console.2.    On the navigation pane, expand Logs, and then choose Log groups.3.    In the Filter search box, enter sns to find only log groups for Amazon SNS.The sns/your-AWS-region/your-account_ID/app/platform_name/application_name log group contains the successful delivery logs.sns/us-east-1/1111111111/app/GCM/Test1sns/us-east-1/1111111111/app/APNS_SANDBOX/Test2sns/us-east-1/1111111111/app/APNS/Test3The sns/your-AWS-region/your-account_ID/app/platform_name/application_name**/Failure** log group contains the failure delivery logs:sns/us-east-1/1111111111/app/GCM/Test1/Failuresns/us-east-1/1111111111/app/APNS_SANDBOX/Test2/Failuresns/us-east-1/1111111111/app/APNS/Test3/Failure4.    Choose the Amazon SNS log group that you want to view.5.    On the Log streams tab, choose a particular log stream to view the application endpoint delivery logs.Consider the following:You can't add a prefix to the streams in CloudWatch Logs.You can't directly change the default log group name for Amazon SNS.The notification content isn't written to your CloudWatch logs. 
That is, the SNS topic delivery logs write only the message metadata to CloudWatch, not the notification content. If you publish to an SNS topic that has both SMS and platform application endpoint subscriptions, the delivery status logs are still populated for these endpoints in their respective log groups. Troubleshoot notification failures: Look up the statusCode with the provider service, such as FCM or APNs. For the provider's exact response message, view the providerResponse. For a list of push notification service response codes, see Platform response codes. Related information: Mobile app attributes"
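As an alternative to the console steps above, a minimal AWS CLI sketch that turns on delivery status logging for a platform application and then reads the resulting CloudWatch logs; the platform application ARN, role ARNs, and log group names are hypothetical placeholders based on the examples above:

# Hypothetical ARNs; replace with your platform application and the IAM roles created earlier.
APP_ARN="arn:aws:sns:us-east-1:1111111111:app/GCM/Test1"
SUCCESS_ROLE="arn:aws:iam::1111111111:role/SNSSuccessFeedback"
FAILURE_ROLE="arn:aws:iam::1111111111:role/SNSFailureFeedback"

# Turn on delivery status logging with a 100% success sample rate.
aws sns set-platform-application-attributes \
  --platform-application-arn "$APP_ARN" \
  --attributes SuccessFeedbackRoleArn="$SUCCESS_ROLE",FailureFeedbackRoleArn="$FAILURE_ROLE",SuccessFeedbackSampleRate=100

# List the SNS log groups that CloudWatch Logs created for the application.
aws logs describe-log-groups --log-group-name-prefix "sns/us-east-1/1111111111/app/"

# Read recent failure delivery logs for the application.
aws logs filter-log-events \
  --log-group-name "sns/us-east-1/1111111111/app/GCM/Test1/Failure" \
  --limit 20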
https://repost.aws/knowledge-center/troubleshoot-failed-sns-deliveries