Columns: Description (string, 6 to 76.5k chars), Question (string, 1 to 202 chars), Link (string, 53 to 449 chars), Accepted (bool, 2 classes), Answer (string, 0 to 162k chars)
Is DataAPI (and the query editor) support for Aurora Serverless v2 or v3 planned? The docs simply state "The Data API and query editor aren't supported for Aurora Serverless v2."
Is DataAPI for Aurora v2 / v3 planned?
https://repost.aws/questions/QUc6tQ-_TsRWeSXAxSlqTPEg/is-dataapi-for-aurora-v2-v3-planned
true
"7Accepted AnswerHi,Thanks for reaching here. I have searched internally that there is an existing feature request for the same. Data API for serverless v2. It is just that we do not have ETA when this feature will be announced. Please stay updated through:[+] https://aws.amazon.com/blogs/aws/category/database[+] https://aws.amazon.com/about-aws/whats-newCommentShareSUPPORT ENGINEERKevin_Zanswered a year agoEXPERTsdtslmnreviewed 8 days agorePost-User-7375555 8 months agoAny update on this, please, it's been 4 months...SharerePost-User-4433514 2 days agoCan we get an update on this please? This was 8 months ago.Share5Can you confirm whether or not it will be before the end of life of serverless v1 data api? Seems like a disaster for those using it if its not before the end of that (approx 7 months from now)CommentShareamason1389answered a year ago0The documentation from the email is confusing me.Is it possible to continue running "Amazon Aurora Serverless version 1" while updgrading from "Amazon Aurora MySQL v1" to "Amazon Aurora MySQL v2"?Ie. If i upgrade from mysql 5.6 to 5.7 while on serverless, do I continue to be able to use the RDS data api?CommentShareAshanswered 4 months ago0We need Data API for V2 ASAP. We'd like to migrate our systems to V2, but that will require doing whole bunch of things, including moving some resources like lambdas to VPCs to have access to RDS Proxy, that increases costs and slows downs our services.Please, add Data API option we to focus on business logic, not on maintaining that architecture.Thanks.CommentSharerePost-User-9003191answered 2 months ago"
"I am using Microsoft AD managed directory and I have checked that the bundle is not graphic or performance, so auto enable IP shouldn't affect it.It was working fine, then when I try to relaunch I am seeing this error:There was an issue joining the WorkSpace to your domain. Verify that your service account is allowed to complete domain join operations. If you continue to see an issue, contact AWS Support.Launch bundle is Power with Windows 10 and Office 2016 Pro Plus (Server 2016 based) (PCoIP)Could it be that my root user has added too many Workspaces to the domain? As we are currently only using one account and doing it manually...ThanksFollowComment"
Why am I seeing a domain join error when launching an already created workspace?
https://repost.aws/questions/QUBHI3jg4DRGS2D5cHmZK9MA/why-am-i-seeing-a-domain-join-error-when-launching-an-already-created-workspace
false
"0Hello there,Could you please elaborate on what you mean by when launching an already created workspace? The error message you posted is usually seen when launching/creating a new Workspace from the Workspaces bundle. When you create a Workspace in AWS Managed AD which is directly registered with Workspaces service, you should not get error related to service account. Are you using AD connector pointing to the AWS Managed AD domain?CommentShareMayank_Janswered a month agorePost-User-3120279 a month agoSo, I have a workspace created already (for example last user active is populated with a date in April) however we decided to rebuild the workspace as it stopped.Now, every time I try to rebuild I see this error. No, the only Directory Service I have set up is Microsoft Managed AD, which was set up in 2018. We are doing all this with a root account, could this be the issue?ShareMayank_J a month agoIs the issue intermittent or consistent? Are you able to launch new Workspaces in the same directory. You will need to open a support case for further assistance on this as we will need to review logs from the backend.SharerePost-User-3120279 a month agoThe issue is consistent when trying to rebuild this particular workspace, however I am able to create new ones. I have seen this issue before and it magically started working. Is there a way we can see logs? Its frustrating that things randomly break and we cannot debug without contacting support, we also only have basic support.ThanksShare-1Check that the AD service account's password didn't expireCommentShareGerman Rizoanswered a month agoMayank_J a month agoWhen using AWS Managed AD you don't need to specify any service account. It uses a reserved service account created by the directory service itself for domain join and customers do not have access to that account.Share"
"I have a public Route 53 root hosted zone redacted.com which has CAA authority for amazon.com, which has valid NS records for two hosted zones in different accounts; dev.redacted.com and qa.redacted.com. When I request a ACM certificate in the qa subaccount, the CNAME validation records are not generated/visible. The exact same process and configuration works correctly and as expected in the dev subaccount. There are no other certificates or records which are interfering with the generation and name resolution is working correctly.How can I retrieve the DNS domain validation options or otherwise debug the problem? Thanks!FollowComment"
No domain validation options generated for Certificate
https://repost.aws/questions/QUrbuOruhJRMGDEyiZKzpoSw/no-domain-validation-options-generated-for-certificate
false
"we have deployed Apache Spark into a kubernetes cluster by our own.In the past, in EMR, setting "hive.metastore.client.factory.class" was enough to use glue catalog. Unfortunattely, In our own deployment, Spark don't see glue databases. No exception is logged by Spark.Our configuration:spark = SparkSession.builder().config("hive.metastore.client.factory.class", "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory").enableHiveSupport()The Client Factory .jar package we built from: https://github.com/awslabs/aws-glue-data-catalog-client-for-apache-hive-metastorecould someone help?Best regards,FollowComment"
Integrate Glue Catalog with own Spark Application deployed on EKS
https://repost.aws/questions/QUpuNHsCHsQQyc8Z76kUo76A/integrate-glue-catalog-with-own-spark-application-deployed-on-eks
false
"0Hello,Assuming that you have built the jars as mentioned in the instructions https://github.com/awslabs/aws-glue-data-catalog-client-for-apache-hive-metastore  for your specific Spark version.I was able to successfully connect to my Glue catalog tables by following the below stepsA Spark Docker image I have built and pushed to an ECR repo, following the instructions provided[1].A new Spark Docker image I have built by including the Glue Hive catalog client jars mentioned on the GitHub page, on top of the previously I have created base Spark image. This patched image was also pushed to the ECR repo.An EKS cluster was created, along with a namespace and service account specifically for Spark jobs.I have downloaded spark on my computer and wrote a small pyspark script to read from my Glue tableFinally, I have used the below “spark-submit” command which ran successfullyspark-submit --master k8s://https://<Kubernetes url> --deploy-mode cluster --name spark-pi --conf spark.executor.instances=1 --conf spark.kubernetes.container.image=<IMAGE_NAME> --conf spark.kubernetes.namespace=<NAMESPACE> --conf spark.kubernetes.executor.request.cores=1 --conf spark.hive.metastore.client.factory.class=com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory --conf spark.hive.metastore.glue.catalogid=<AWS ACCOUNT ID> --conf spark.hive.imetastoreclient.factory.class=com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory --conf spark.kubernetes.file.upload.path=s3a://Bucket/ --conf spark.kubernetes.authenticate.driver.serviceAccountName=<SERVICE ACCOUNT NAME> script.pyHope this information helps!--Reference--[1]https://spark.apache.org/docs/latest/running-on-kubernetes.html#:~:text=It%20can%20be%20found%20in,use%20with%20the%20Kubernetes%20backendCommentShareSUPPORT ENGINEERDurga_Banswered 2 months ago"
"I am getting "ConnectTimeoutError: Connect timeout on endpoint URL: "https://redshift-data.us-east-1.amazonaws.com/" in AWS Glue python shell job when I am using boto3 clients redshift-data APIs. Below are the boto3 APIs I use in python script.client = boto3.client('redshift-data')response = client.execute_statement( ClusterIdentifier=redshift_cluster, Database=redshift_db, DbUser=redshift_db_user, Sql=sql, StatementName=stmt_name)response = client.list_statements( MaxResults=2, NextToken='', RoleLevel=True, StatementName=stmt_name, Status='ALL' )I am not sure why boto3 client is trying to access "https://redshift-data.us-east-1.amazonaws.com/" endpoint. If I run this script local machine (after setting aws secrets in environment vars), it runs successfully. The issue appears only with Glue job.The IAM role of Glue job has permissions AWSGlueServiceRole, AmazonRedshiftFullAccess, AmazonRedshiftAllCommandsFullAccess and AmazonRedshiftDataFullAccess.Any body has idea about this?FollowComment"
Using boto3 client redshift-data APIs in AWS Glue python shell job gives ConnectTimeoutError error
https://repost.aws/questions/QUwYyb4-YxThm5JJv9XcwpMg/using-boto3-client-redshift-data-apis-in-aws-glue-python-shell-job-gives-connecttimeouterror-error
false
"0According to this AWS Documentation, we can understand that whenever you try to connect to Redshift programmatically then it will inherently make use of the endpoint depending upon your region.Please do ensure that a connection is attached to your Glue job such that it is able to reach the endpoint. You can add a network connection to your Glue job mentioning the VPC and subnet. Please do ensure that the Glue job has access to reach the redshift endpoint through the subnet mentioned. Please do attach a private subnet with NAT gateway to the Glue job.Make sure that security group attached to Glue job has a self referencing inbound rule.Make sure that the security group of redshift cluster is allowing inbound traffic from the security group of Glue job. If it is not, then add an inbound rule to the redshift cluster's security group.Please refer this article for more details.CommentShareSUPPORT ENGINEERChaituanswered 10 months agoEXPERTFabrizio@AWSreviewed 10 months ago"
"Hi,Is there any API to get the change rates used by AWS is the invoices?Where does the change rates used by AWS come from?Thanks for your helpFollowComment"
How can I get the exchange rate between USD and EUR used by AWS
https://repost.aws/questions/QUzwFKJB6iSuKNlGeBfKVtRQ/how-can-i-get-the-change-rate-between-usd-and-eur-used-by-aws
false
"1Hi FabienG,At the moment there is no API for the exchange rates used by AWS to generate invoices, so you cannot pull this info programmatically. The exchange rate will likely be different on different invoices. If you pay in currency different from USD, the exchange rate is usually displayed on the invoice which you can download from the Bills page of the billing console of your account for a specific month.The specific rate for each currency is not visible in the AWS Billing and Cost Management console, but you can see the estimated conversion rate for your current bill in the billing console after you have chosen a payment currency from the My Account section of the console, and you are using a Visa or MasterCard as your default form of payment for the account. You can change the currency conversion at any time before the invoice is generated because the currency used for each invoice is determined at the time of creation. For past bills, you can use the "Orders and invoices" section of the Billing and Cost Management console. Because exchange rates fluctuate daily, your estimated conversion rate fluctuates daily as well until the invoice is finalized. For most charges, the invoice is finalized between the 3rd and 8th of the month that follows billable usage.I hope this info helps!CommentShareEXPERTNataliya_Ganswered a year agoFabienG 7 months agoHi,Any API planned to get the exchange rate ?RegardsShareMus_B 3 months agoHi Nataliya_G,Do you know if an API Exchange Rate feature is in the pipeline. I would be very interested in incorporating this into my billing automation process.Regards,MusShare-1Hi Fabien,This article explains what currencies are supported and how to change the displayed currency:https://aws.amazon.com/premiumsupport/knowledge-center/non-us-currency/— Brian D.CommentShareEXPERTAWS Support - Briananswered a year ago"
"I used the generic role for creating a new instance, it errored out:The instance profile aws-elasticbeanstalk-ec2-role associated with the environment does not exist.I added new permissions and even gave it administrator permissions, I'm still getting the same error:The instance profile aws-elasticbeanstalk-ec2-role associated with the environment does not exist.I know that Beanstalk is more complicated to get going, but this is getting bad.FollowComment"
The instance profile aws-elasticbeanstalk-ec2-role associated with the environment does not exist.
https://repost.aws/questions/QURMEc7-pmT0OT4-ui2u55mg/the-instance-profile-aws-elasticbeanstalk-ec2-role-associated-with-the-environment-does-not-exist
false
"0Hi,The error looks similar to the following problem reported by another user a few days ago. Can you check if the solution is also useful for you? If not, can you provide us with more information?CommentShareMikel Del Tioanswered 14 days ago"
"Hello,Starting today our codebuild project, which has been working flawlessly for months and not modified has stopped working.Failure occurs in DOWNLOAD_SOURCE phase, with this message: "SINGLE_BUILD_CONTAINER_DEAD: Build container found dead before completing the build. Build container died because it was out of memory, or the Docker image is not supported."The applications code is hosted on CodeCommit.We did not do any modification to the buildspec file, and the very same build was even still working yesterday.We are using aws/codebuild/amazonlinux2-x86_64-standard:2.0 images, and did not do any modifications but the failure suddenly happened today. During the previous build the job was using at most 20% of the available memory (15GB)Sadly nothing gets logged because it fails even before processing the first commands of the buildspec so it's nearly impossible to debug.Do you guys have any ideas on what could cause this ?Thanks,DanielEdited by: drolland on Oct 20, 2020 2:29 AMEdited by: drolland on Oct 20, 2020 2:30 AMFollowComment"
Codebuild suddenly fails in DOWNLOAD_SOURCE phase
https://repost.aws/questions/QU_e_YC5MtQde7071Pc3Qp8g/codebuild-suddenly-fails-in-download-source-phase
false
"1So I finally found the reason by creating some test build, so I'm sharing the solution I found below.SolutionIf you are using multiple sources for your build project, get sure that Git Clone Depth is set to 1 for all your sources. <br>Git clone depth defaults to "Full", so if you have built your project with CloudFormation get sure to include GitCloneDepth: 1 in your template SecondarySources list.ExplanationNot sure about what has changed internally, but there seems to be some kind of timeout when cloning the repos, even though the error message is unclear about that. <br> As of today (2020-10-20), this error is reproductible if you have multiple repos containing enough commits, by creating an empty test project and including 3 additional sources. The build will fail with the same error message before starting.Once I was able to reproduce the issue, I've tried fiddling around with clone depth and it finally worked. Again, I'm not sure if this is temporary or will be fixed, but as of yesterday (2020-10-19), this workaround was not necessary.Edited by: drolland on Oct 20, 2020 5:37 AMCommentSharedrollandanswered 3 years agorePost-User-2213230 9 months agoYou are a lifesaver mate. It indeed was a git clone depth issue. When I set it to 1, the code build job started working again. In my case, even the logs were not getting printed in the cloudwatch. Thanks a ton!Share0Hi drolland,Would you PM me build ARNs of affected builds? Preferably one before and one after you noticed this behavior change. I'd like to help investigate this further.CommentShareAWS-User-6139167answered 3 years ago"
"Hi,Trying to Deploy an ECS service running a single task with the image: datalust/seq:latest. I want to specify certain arguments to my container, how do I achieve the equivalent of running the following local command:docker run --name seq -e ACCEPT_EULA=Y -p 5341:5341 -p 80:80 datalust/seq:latestUsing ECS Fargate. My CDK code is below: image: ContainerImage.fromRegistry('datalust/seq:latest'), containerName: `SeqContainer`, command: [`--name`, `seq`,`-v`, `/efs:/efs`], essential: true, cpu: 1024, environment: { ACCEPT_EULA: 'Y' }, portMappings: [ { containerPort: 80, hostPort: 80 }, { containerPort: 5341, hostPort: 5341 } ], logging: new AwsLogDriver({ logGroup: new LogGroup(this, 'SeqLogGroup', { logGroupName: '/aws/ecs/SeqLogGroup', removalPolicy: RemovalPolicy.DESTROY }), streamPrefix: 'SeqService' }) })However when I run with this config, it fails to start my task properly, it seems that the Command specification overrides the default behaviour of the container, whereas I just want to supply arguments to the container as an addition to the default behaviour.FollowComment"
CDK ECS Fargate - how to specify arguments to Docker container?
https://repost.aws/questions/QUx8ALedH3TZiemMQKNGkHcA/cdk-ecs-fargate-how-to-specify-arguments-to-docker-container
false
"1The command parameter of the task definition is to specify the command to run in your container at startup. The other parameters that you send to the docker cli (i.e. -e, -v, -p) usually map to other parameters of the task definition.You properly used environment for -e and portMappings for -p. For -v you need to use volumes and mountPoints in your Task Definition. You don't need the --name parameter, it is specified in containerName.See this link for more information on how to use EFS volumes with ECS: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/efs-volumes.htmlSee this for full reference of task definition parameters: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.htmlCommentShareogaumondanswered 10 months ago"
"Hi guys,I have an Amazon EC2 instance (micro) that generates ZIP files and uploads them to S3. For the past few years I haven't really had any issues but recently the instance has occasionally been crashing during busier periods when more ZIPs are being generated.I think I will resize the instance to medium but wondering if the volume attached to the instance will need to be increased as well? The current volume is a General Purpose SSD 30GB 100 / 3000.Queue Length Graph during busy period here: https://ibb.co/0FF7VX3ThanksEdited by: ajh18 on Sep 16, 2019 8:48 PMEdited by: ajh18 on Sep 16, 2019 11:29 PMFollowComment"
EC2 - VolumeQueueLength
https://repost.aws/questions/QUB7tOq9IAQnerjfdD4RH2kg/ec2-volumequeuelength
false
"0Hi,I think the answer is "it depends". The VolumeQueueLength during the peak times is definitely showing a bottleneck. However, because you stated that your "Zips" are crashing, the crashes could possibly be due to memory shortage, which may result in "page file swapping", which may result in overloading your disk I/O to swap in/out chunks of process memory. So, it is possible that by moving to a t2.medium, you will now have adequate RAM so that swap will be 0 or minimal, which may result in no Disk I/O bottlenecks.To keep costs minimal, I would take the following approach.1. Increase the instance size from t2.micro to t2.medium2. Monitor for performance, VolumeQueueLength, and reliability3. If you are still noticing performance issues, and the VolumeQueueLength continues to show a bottleneck during the peak times, then increase the Volume size (which will increase the IOPs).Hope this helps,-randyCommentShareRandyTakeshitaanswered 4 years ago0Thanks Randy! I'll resize to medium and continue to monitor.CommentShareajh18answered 4 years ago"
"I am trying to start a simple ECS service using the nginx image. Just a single task with no LB. I launched it from a VPC with a subnet (1 of 7). The VPC is connected to a internet gateway. I configured the task to have only private IP. Oh, the run time is Fargate.The result is that the task get stuck in "Pending" state for ever.If I enable the automatically assigned public IP, the task runs without any problem.What could be the problem?FollowComment"
ECS: Unable to start task from within a private subnet without enabling public IP
https://repost.aws/questions/QUj7KTig-SRrm2kofIvjfKXA/ecs-unable-to-start-task-from-within-a-private-subnet-without-enabling-public-ip
false
"1Look at step 5 here: https://docs.aws.amazon.com/AmazonECS/latest/userguide/service-configure-network.htmlYou need to have network access from the ECS task to ECR to retrieve the container image.You can achieve it by assigning a public IP address to the task or having a NAT gateway in your VPC.Alternatively, if you don't want to pull ECR image across Internet, you can also use a VPC endpoint (https://docs.aws.amazon.com/AmazonECR/latest/userguide/vpc-endpoints.html#ecr-vpc-endpoint-considerations)CommentShareRichardFananswered a year agoAWS-User-3532884 a year agoThanks for the reply. I though having an internet gateway is enough? As in, igw does what NAT Gateway does in this case?ShareRichardFan a year agoInternet gateway alone is not enough, the ECS task needs a public IP address to connect to the internet.Share1As Richard said, it's just down to your networking configuration. Either you need toA) run the tasks in a subnet that use the InternetGateway as the route to "all" (0.0.0.0/0) and enable EIP auto-assign on your tasksorB) run containers which use a NAT Gateway to 0.0.0.0/0 as the route to 0.0.0.0/0 (container -> nat -> internet -> ECR)orC) run the container in subnet(s) that have a VPC Endpoint to ECR (you need two, one for api, one for ECR) in which case you get, container -> endpoint -> ECRIf you are going to have big workloads (i.e, download images constantly, large ones in the mix) then using the VPC endpoint with ECR can reduce your bill significantly over using a NAT Gateway.If your image is in DockerHub/Quay.io, then you must either go with A or B, but C won't work for these images, unless you create a repository in ECR and enable cachethrough, but that's "even more involved"If you are starting with AWS and ECS, you could something simple such as, this docker-compose file, i.e. docker-compose.yamlversion: 3.8networks: public: x-vpc: PublicSubnetsservices: nginx: image: nginx # or URL to your ECR repo x-network: AssignPublicIp: True Ingress: ExtSources: - IPv4: 0.0.0.0/0 Description: ANY networks: public: {} ports: - published: 80 target: 80 protocol: tcpand then run (on Linux/MacOS. Not sure about the source compose-x/bin/activate on Windows)python3 -m venv compose-xsource compose-x/bin/activatepip install pip -U; pip install ecs-compose-x;ecs-compose-x initecs-compose-x up -n my-first-ecs-app -d templates/ -f docker-compose.yamlThis will generate all the CFN templates you need to create a new VPC (without a NAT Gateway, saving some $$ for a PoC), create the service and put it in the public subnets, with AssignPublicIp set to True to get an IP address.Hope this helps your journey. See this for more details on the above.CommentShareJohnPrestonanswered a year agoAWS-User-3532884 a year agothanks. The subnet that the task is running in does have a default route to the igw.Share"
"Hello,I'm using SES and trying to use the {{amazonSESUnsubscribeUrl}} feature to automatically manage opt-outI created an Email List and am sending some emails with it specified in the SendEmail request (SESV2 API), but the emails don't show an unsubscribe linkCan anyone help?Thank youSamFollowComment"
AWS SES: Using {{amazonSESUnsubscribeUrl}} in HTML email but unsubscribe link not showing in emails
https://repost.aws/questions/QUCStgSjodSBycqqo7zkXxDA/aws-ses-using-amazonsesunsubscribeurl-in-html-email-but-unsubscribe-link-not-showing-in-emails
false
"I've experienced 3 System check failures on my LightSail Windows Server instance over the past 18 months. The system check failure lasts for many hours and then according to the FAQ pages (https://aws.amazon.com/premiumsupport/knowledge-center/lightsail-instance-failed-status-check/) it basically means that the underlying hardware has failed and the instance need to be stopped and restarted to migrate to new hardware. I do this manually but it leads to a lot of downtime (especially since LightSail takes upto 1 hour to stop the instance when there's a System check failure).The resource utilization up to the point of failure is extremely low (see image): https://www.dropbox.com/s/a882b3nbu8roauu/Lightsail.jpgDoes anyone know why my LightSail instances keep having System check failures and what I can do to avoid it?More importantly, is there anyway to have Amazon automatically Stop and Start the failed instance if the System Check Failure continues for more than X minutes/hours?FollowCommentRoB a year agoInstance status checks might also fail due to over-utilization of resources, have you ruled out this aspect?ShareRBoy a year ago@RoB yes, this instance has an average utilization of under 0.5% CPU right up to the point of failure and similarly low for network/disk activity, it's an IIS server handing a few requests per minute. Plus whenever it has a system check failure, it continues in this state for 3+ hours and the only solution is to stop (which takes up to 45 minutes whenever it fails) and then start; then it's good for the next 6 months or so until the next system check failure.Share"
Multiple system check failures over past 18 months - how to automatically stop/start instance?
https://repost.aws/questions/QUYSVvsj3tQHm-pVxFxCdMCA/multiple-system-check-failures-over-past-18-months-how-to-automatically-stop-start-instance
false
"0Hello RBoy,I understand that your Lightsail instance keeps failing System checks now and then. After that, you have to manually stop and start your instance to migrate it to a new host. However, you would like to automate the process of stopping and starting your instance in the case where System failure happens.I suggest you look at your system logs to check what is causing your Lightsail instances to have System check failure. The logs will reveal an error that can help you troubleshoot the issue.To automatically stop and start your instance you can use a Lambda function and CloudWatch Events to trigger these actions. CloudWatch automatically manages a variety of metrics for standard EC2 instances, however, the metrics collected in Lightsail are by default not visible in the CloudWatch dashboard. With that being said, you will have to do the following to get your Lightsail metrics in CloudWatch:Create an IAM user with the necessary permissions to submit the CloudWatch metrics data collected from the Lightsail instance.Installing the CloudWatch Agent on your Lightsail.Configuring the CloudWatch Agent to use the IAM user when submitting data to CloudWatchBelow is a sample code you can use to schedule the stop of the Instance:import boto3region = 'us-west-1'client = boto3.client('lightsail', region_name='region')def lambda_handler(event, context): client.stop_instance( instanceId='ID-OF-YOUR-LIGHTSAIL-INSTANCE')A sample code you can use to schedule the start of the Instance:import boto3region = 'us-west-1'client = boto3.client('lightsail', region_name='region')def lambda_handler(event, context): client.start_instance( instanceId='ID-OF-YOUR-LIGHTSAIL-INSTANCE')For region, replace "us-west-1" with the AWS Region that your instance is in and replace 'ID-OF-YOUR-LIGHTSAIL-INSTANCE' with the ID of the specific instance that you want to stop and start.I hope that this information will be helpful.CommentShareCebianswered a year agoRBoy 10 months agoThanks @Cebi. How does one pull the system logs for LightSail instances? The link you provided is for EC2 instances.Share"
I'd like to remove unused security groups. I also need to know which security groups are associated with my EC2 instances without going through the EC2 instances one by one. Is there any command or solution to make this manageable?
Security group association to EC2 instances
https://repost.aws/questions/QUfYwUCC43Ql2jZTJ2r4MyOg/security-group-association-to-ec2-instances
true
"0Accepted AnswerThere is actually a simple way to see the associations.https://aws.amazon.com/premiumsupport/knowledge-center/ec2-find-security-group-resources/Run the following command in the AWS CLI to find network interfaces associated with a security group based on the security group ID:aws ec2 describe-network-interfaces --filters Name=group-id,Values=<group-id> --region <region> --output jsonThe output of this command shows the network interfaces associated with the security group.Review the output.If the output is empty similar to this example, then there are no resources associated with the security group:{"NetworkInterfaces": []}If the output contains results, then use this command to find more information about the resources associated with the security group:aws ec2 describe-network-interfaces --filters Name=group-id,Values=<group-id> --region <region> --output json --You can also see from the console :Copy the security group ID of the security group that you're investigating.In the navigation pane, choose Network Interfaces. Paste the security group ID in the search bar.CommentSharemojtothanswered 3 months ago0You can use AWS Firewall Manager to manage your security groups at scale, see this blog post.CommentShareVincentanswered 3 months ago0Hi Sarah,You could look into AWS Config rule: https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-aws-delete-ec2-security-group.htmlThat rule will delete all unused Sec groups. Hope it helps!CommentShareEXPERTalatechanswered 3 months ago"
Hello! Can we use an existing toll-free number purchased through OpenPhone? Thx
existing toll free number
https://repost.aws/questions/QUuA496D_wRguR-9Itj3-zkA/existing-toll-free-number
false
[0] According to the FAQ in the documentation, you cannot port numbers from external providers. (Robert Love, answered a month ago)
"To solve the n + 1 problem that graphql has when querying nested data or a collection, Dataloader was introduced.https://github.com/graphql/dataloaderFrom AWS docs:GraphQL ProxyA component that runs the GraphQL engine for processing requests and mapping them to logical functions for data operations or triggers. The data resolution process performs a batching process (called the Data Loader) to your data sources. This component also manages conflict detection and resolution strategies.https://docs.aws.amazon.com/appsync/latest/devguide/system-overview-and-architecture.htmlHow do I implement a Dataloader with Appsync when using Serverless Aurora as a datasource? Is this possible with just using apache velocity templates without using a lambda?It seems that you can use a BatchInvoke on lambda. But that has a hard limit of 5 items in each batch.FollowCommentJonny a year agoThe hard limit no longer exists! You can set it from 0 to 2000.Share"
How to implement a Dataloader with Appsync when using Serverless Aurora as a datasource?
https://repost.aws/questions/QU28tid5_fRKysGJhVnNV_kw/how-to-implement-a-dataloader-with-appsync-when-using-serverless-aurora-as-a-datasource
false
"Hi,We have a manual setup redis ec2 instance that is maxing out the baseline bandwidth allocated for the instance, but we do not need higher CPU/Mem for the instance, what are our options? We aren't able to use ElasticCache it does not fit our budget range.ubuntu@ip-10-0-0-98:~$ ethtool -S ens5NIC statistics: tx_timeout: 0 suspend: 0 resume: 0 wd_expired: 0 interface_up: 2 interface_down: 1 admin_q_pause: 0 bw_in_allowance_exceeded: 333 bw_out_allowance_exceeded: 303473Also, there seems to be not where to lookup what this invisible network "credit balance" unlikely CPU, EBS disk balance?FollowComment"
Increase instance bandwidth without changing instance type?
https://repost.aws/questions/QUTFvQUwPaTriX1x6Oab8FbQ/increaese-instance-bandwidth-without-changing-instance-type
true
"1Accepted AnswerThe available network bandwidth of an instance depends on the number of vCPUs that it has. The EC2 user guide has links describing the performance of different instance types. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-network-bandwidth.htmlYou could also consider scaling horizontally, by adding an instance. This would also give you an opportunity to use multiple availability zones and remove a single point of failure.CommentShareJeremy_Sanswered a year agoFreedom_AWS a year agoHi, yes, we have considered all of these, and would like to avoid doubling the current cost with the horizontal scaling since the only bottleneck is the network bandwidth.Additionally, there seems to be no way to see the network bandwidth credit stats unlikely the CPU or disk credits?ShareJeremy_S a year agoThere is no way to change the network bandwidth without increasing the instance size.You could review the link above to see if there is a similar instance size that offers more bandwidth. For example, a T3 instead of a T2.ShareFreedom_AWS a year agoWe have now split the traffic to two servers now and yet still observing bw_in_allowance_exceeded: 31 bw_out_allowance_exceeded: 10012While we are maximum less than 250Mbps on a .5Gbps baseline on each machine, this doesn't make much sense...Sharemlissner a year agoI have a pretty similar question over here with some interesting analysis if you're interested: https://repost.aws/questions/QUv105xDmfQMGUiBbfeYW-iQ/elasticache-shows-network-in-and-out-as-exceeded-but-howShare"
"On the "modify" page, the "DB engine version" list only shows the existing version, I can't select a version to upgrade to.https://stackoverflow.com/q/75319524/924597FollowCommentiwasa EXPERT4 months agoHi, @Shorn.Can you share a screenshot of the management console?Normally you should be able to select the new engine version.ShareShorn 4 months agoWhen I came back the next day (this morning) - the console now has the DB engine version options I expected.Over the course of trying to diagnose the issue yesterday, I took a lot of different actions, so I've no idea what (if any) caused the problem to go away.I tried to post my notes here but they're too long and I'm not willing to spend time editing them for this forum.Share"
How do I change my RDS Postgres engine version from the AWS console?
https://repost.aws/questions/QU4zuJeb9OShGfVISX6_Kx4w/how-do-i-change-my-rds-postgres-engine-version-from-the-aws-console
false
"Hi,Our instance i-01e1a29d2d2b1c613 just rebooted itself at 23:50:03 UTC and the machine was not restarted until 00:42:57 UTC.From /var/log/syslog:May 9 23:50:03 usanalytics systemd[1]: Started Session 916653 of user ubuntu.May 10 00:42:57 usanalytics rsyslogd: [origin software="rsyslogd" swVersion="8.16.0" x-pid="1158" x-info="http://www.rsyslog.com"] startWe didn't issue the reboot. We have checked a lot of logs and we didn't find anything. Can somebody explain why this happened?ThanksAlfredoEdited by: alfredolopez on May 10, 2019 2:59 AMFollowComment"
EC2 instance rebooted by itself
https://repost.aws/questions/QUJTK5pEciRBeQjfagKZvXDw/ec2-instance-rebooted-by-itself
true
"0Accepted AnswerHi Alfredo,I can confirm due to unforeseen circumstances the underlying hardware that your EC2 Instance was running on experienced a transient underlying hardware issue. As a result, it was rebooted as part of the recovery process, apologies for any inconvenience this may have caused. You can review this via the System Status Check Cloudwatch graph [1].For future references, I would highly recommend implementing Auto Recovery for your Instances. You can automatically recover Instances when a system impairment is detected. This feature recovers the instance on different underlying hardware and reduces the need for manual intervention. More information regarding this can be found in the below documentation:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.htmlIn summary, the reboot was caused as part of a recovery procedure when the underlying hardware experienced a transient issue. However, since you performed a stop/start your instance was migrated to a different underlying hardware. More information about what happens when you perform a stop/start can be found below:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html#instance_stopRegards,LoiyAWS[1] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-system-instance-status-check.html#types-of-instance-status-checksCommentShareAWS-User-5964800answered 4 years ago"
"Hi All -New AWS user here...I followed the "Video on Demand on AWS Foundation" reference solution to transcode mp4 into HLS, DASH, and Smooth Stream output formats. I set up a Cloudfront distribution as well and configured it for Smooth Streaming as per AWS docs (https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/on-demand-video.html), made everything public, set up CORS etc.However, I'm still not able to play my Smooth Streaming content (HLS and DASH are fine). I tested the output video with the Microsoft test server and ExoPlayer on my streaming stick. Both players try to load the content, progress bar indicates the correct duration, but no video or audio.Exoplayer log: "com.google.android.exoplayer2.upstream.HttpDataSource$InvalidResponseCodeException: Response code: 400"If I try to play publicly available Smooth streams like: https://test.playready.microsoft.com/smoothstreaming/SSWSS720H264/SuperSpeedway_720.ism/manifest, Exoplayer successfully plays the stream.So, I'm guessing there's something I didn't set up properly with my MediaConvert/CloudFront setupMy Media Convert job:input: one mp4 (1920 x 1080, avc) -> two MS Smooth outputs (1280 x 720 and 640 x 360) avcDoes anyone have pointers on how to root cause this?Thanks.FollowComment"
Unable to play Smooth Stream
https://repost.aws/questions/QUfo58z2OsQH-lj_UOHPlZgg/unable-to-play-smooth-stream
false
"1Hi,Following are some of the steps you could to troubleshoot.When you create Smooth Stream content in Media Convert, make sure you select Microsoft Smooth Streaming as outputWhen you playback your Smooth Streaming content, enable the develop mode in your browser ( Select F12 for Chrome browser) and see what the error was when downloading the video and audio.Confirm following headers are added under behavior.Cache Based on Selected Request headers: Select WhitelistWhitelist headers:Access-Control-Allow-OriginAccess-Control-Request-HeadersAccess-Control-Request-MethodOriginConfirm Smooth Streaming is set to "Yes"Confirm that you have put clientaccesspolicy.xml or crossdomainpolicy.xml at the root of your distribution S3 bucket. Content can be found in https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/on-demand-video.htmlRegards,SamEdited by: samuelaws on Jan 7, 2021 10:34 PMCommentSharesamuelAWSanswered 2 years ago1Thank you samuelaws!!Adding the Whitelist headers did the trick. I'm able to play Smooth Streams now.Thank you so much for the help.CommentShareTestStreameranswered 2 years ago"
"I have an ACM certificate that is no longer needed. The Route 53 zone it linked to is no longer available so it can't auto-renew. I loaded it up to delete, but it has associated resources. The resources are load balancers in a different AWS account, one not owned by my company.How does this happen, and how do I disassociate it?FollowComment"
Why does my ACM cert give associated resources in another account?
https://repost.aws/questions/QUasFyDNVhSnKWsMpmC8_8eA/why-does-my-acm-cert-give-associated-resources-in-another-account
true
"1Accepted AnswerDeploying a Regional API endpoint creates an Application Load Balancer by API Gateway. The Application Load Balancer is owned by API Gateway service, not your account. The ACM certificate provided to deploy API Gateway is associated with the Application Load Balancer.Similarly, defining a custom endpoint for your domain in Amazon ElasticSearch Service (Amazon ES) creates an Application Load Balancer. The Application Load Balancer is owned by the ElasticSearch service, not by your account. The ACM certificate provided with creating the custom endpoint is associated with the Application Load Balancer.To remove the association of the ACM certificate with the Application Load Balancer given any of these use cases, please follow the guidelines outlined in our blogpost (https://aws.amazon.com/premiumsupport/knowledge-center/acm-certificate-resources/) for the respective service.CommentShareSUPPORT ENGINEERSumukhi_Panswered a year agoEXPERTChaoran Wangreviewed a year ago"
"Hi all,I am new to AWS, and was trying to setup an IoT system where a lambda function is triggered whenever messages are received. Whenever I receive the message, however, the lambda function is not invoked (although I know the message has been received and parsed correctly). Looking at the logs, I see the message iot.amazonaws.com is unable to perform: lambda:InvokeFunction on resource: arn:aws:lambda:eu-west-3:216554370311:function:[...]. I have added permissions to AWS IoT as referenced here, and I am currently at a loss on what could be the cause of such behavior.Has any of you faced this error?ThanksFollowComment"
AWS IoT unable to invoke lambda function
https://repost.aws/questions/QUxlrRSdixQLi0D-DpNCmZOQ/aws-iot-unable-to-invoke-lambda-function
true
"0Accepted AnswerTurns out that if I launch the cli command reported here it works, I tried setting up everything through console and for whatever reason it didn't work as expected.CommentShareSandroSartonianswered a year ago"
"Hello,We have updated our RDS MySQL instance from 5.6.40 to 5.6.44 alongside the CA update.We have Performance insights enabled and we can see, for a 4 vCPU instance (r4.xlarge), spikes up to 107 AAS for synch/rwlock/innodb/btr_search_latch.We don't know if this is related to the upgrade or not has we didn't have Performance Insights enabled previously.What can be the cause, how can we decrease this?Thank youEdited by: mimss on Jan 20, 2020 6:42 AMEdited by: mimss on Jan 20, 2020 7:13 AMFollowComment"
Very high btr_search_latch after upgrading from 5.6.40 to 5.6.44
https://repost.aws/questions/QUQDmrJR4AQCKcpviAwswtEA/very-high-btr-search-latch-after-upgrading-from-5-6-40-to-5-6-44
false
"0That statistic is related to the innodb adaptive hash index https://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.htmlIt's a performance optimization which dynamically builds hash indices to supplement the regular btree indices whenever MySQL notices a certain pattern of queries. Unfortunately, in 5.6 it takes a global lock when doing this which causes spikes like you're seeing.You can turn it off by setting innodb_adaptive_hash_index to 0 in your db param groups. It's dynamically settable so you can experiment quite easily to see the effect with it on/off. If you upgrade to 5.7, you can tweak an additional parameter call innodb_adaptive_hash_index_parts which lets you split it up and avoid global locking.You can also read more about it here https://www.percona.com/blog/2016/04/12/is-adaptive-hash-index-in-innodb-right-for-my-workload/CommentSharethatsmydoinganswered 3 years ago"
"Hi,I have built a stack to deploy on my AWS Workspace, but after CDK Synth which does NOT throw any error, CDK Deploy fails to deploy the CF Template for the Stack. It throws an error. I am trying on a Windows machine.Could you have a look and suggest me anything that could work Plz. I am trying to learn AWS but keep getting stuck so it would be a big help if you can have a look at this Plz.Detailed Error Msg :[66%] success: Published <<>> (Removed the actual info)[09:55:03] [66%] cached: From cache C:\MyStack\cdk.out.cache<<>>.zip[09:55:03] [66%] upload: Upload s3://file-assets-bucket-dev/Test.zip[100%] fail: Unexpected close tagLine: 94Column: 7Char: >❌ MyStack failed: Error: Failed to publish one or more assets. See the error messages above for more information.at publishAssets (C:\aws-cdk\lib\index.js:374:73647)at processTicksAndRejections (node:internal/process/task_queues:96:5)at async CloudFormationDeployments.publishStackAssets (C:\aws-cdk\lib\index.js:381:87962)at async CloudFormationDeployments.deployStack (C:\aws-cdk\lib\index.js:381:84233)at async deployStack2 (C:\aws-cdk\lib\index.js:383:145458)at async C:\aws-cdk\lib\index.js:383:128776at async run (C:\aws-cdk\lib\index.js:383:126782)❌ Deployment failed: Error: Stack Deployments Failed: Error: Failed to publish one or more assets. See the error messages above for more information.at deployStacks (C:\aws-cdk\lib\index.js:383:129083)at processTicksAndRejections (node:internal/process/task_queues:96:5)at async CdkToolkit.deploy (C:\aws-cdk\lib\index.js:383:147507)at async exec4 (C:\aws-cdk\lib\index.js:438:51799)[09:55:24] Reading cached notices from C:.cdk\cache\notices.jsonStack Deployments Failed: Error: Failed to publish one or more assets. See the error messages above for more information.[09:55:24] Error: Stack Deployments Failed: Error: Failed to publish one or more assets. See the error messages above for more information.at deployStacks (C:\aws-cdk\lib\index.js:383:129083)at processTicksAndRejections (node:internal/process/task_queues:96:5)at async CdkToolkit.deploy (C:\aws-cdk\lib\index.js:383:147507)at async exec4 (C:\aws-cdk\lib\index.js:438:51799)List of debugging that I have already tried so far and it has NOT worked yet , are listed below.One of my colleagues imported my stack and ran CDK synth and CDK Deploy for my stack through his Macbook and it worked fine, i.e. it created a CloudFormation template for my stack on the workspace successfully.So, I deleted Virtual Env on my Windows machine , deleted CDK.out , Deleted CDK.context.json and re-created Virtual Env again and tried CDK deploy but no success, it still failed with same error.Also installed latest version of CDK using npm install -g aws-cdk@latestTried running CDK DEPloy(after CDK synth) and got stuck with same error included belowAlso, tried to generate a Log file using cdk deploy --verbose > log.txt but Log file is empty(i.e. 0 KB size)Also ran this command , cdk deploy --no-previous-parametersExpected result : CDK Deploy should run successfully for my stack and deploy a CloudFormation Template(as per the Stack I have created) to the AWS WorkspaceFollowComment"
"Unable to run "CDK Deploy" on Windows 10 , it throws an error on running CDK Deploy"
https://repost.aws/questions/QUetCqMwMrQtCWsLGxgOD1mQ/unable-to-run-cdk-deploy-on-windows-10-it-throws-an-error-on-running-cdk-deploy
false
"Having a real hard time getting data into Dynamo using Step Functions. The data flow is:App Flow from Salesforce to S3Event Bridge watching the S3 bucketMap the data from S3 and store in Dynamo.This almost works. Whenever App Flow gets new data from SalesForce, the step function is invoked. After the data is pulled from S3 (using GetItem), the next step is a Lambda in C#. This Lambda just deserializes the JSONL, populates a C# model for each item, and returns an array of the model.Next step is a Map step that passes each item in the Array to DynamoDB PutItem.Everything works, until you try to write a Boolean. Strings and Numbers are written fine, but it blows up if it sees a boolean. DynamoDB won't convert the boolean value.Things I have tried that didn't work:Write the entire model object to Dynamo - blows up on bool valuesUpdate PutItem config to map each field - blows up if there are any Nulls (nulls get dropped when the return value from the Lambda is serialized into JSON)Serializing the values back into a JSON string inside the Lambda - blows up on bool valuesReturning a list of DynamoDB Documents - blows up because each item in the document gets serialized as an empty objectBypassing the Lambda altogether and try passing the JSONL to the Map step - blows up because it's not an arrayWhen trying write Bool values, the error is "Cannot construct instance of com.amazonaws.services.dynamodbv2.model.AttributeValue (although at least one Creator exists): no boolean/Boolean-argument constructor/factory method to deserialize from boolean value (false)"I can't see any obvious way to get this working, unless we convert every item to a string, which causes problems reading the model back out later, as we lose type information and makes deserializing the model difficult.FollowComment"
Coordinating Step functions from App Flow -> Event Bridge -> DynamoDB
https://repost.aws/questions/QUdHNO4lHJRcWpE7oORkeVpA/coordinating-step-functions-from-app-flow-event-bridge-dynamodb
false
"0You did not show how your code looks like, but the item should be in the following format "AttributeName": {"BOOL": False|True}. Is that what you are doing?CommentShareEXPERTUrianswered a year ago0I cannot post the source-code. But in pseudocode, here is what it is doing: // deserialize objects var jsonReader = new JsonTextReader(new StringReader(payload)) { SupportMultipleContent = true }; JsonSerializerSettings jss = new JsonSerializerSettings(); jss.ReferenceLoopHandling = ReferenceLoopHandling.Ignore; var jsonSerializer = new JsonSerializer(); // MyObject is the model List<MyObject> objects = new List<MyObject> while (jsonReader.Read()) { var item = jsonSerializer.Deserialize<MyObjecct>(jsonReader); objects.Add(doc); } return new Response() { Data = objects };The response gets serialized as JSON automatically in the step function. I tried changing the model to a DynamoDB Document type (a Dictionary<string,object>) which is what the DynamoDB SDK does, but that ends up returning the value of each item in the dictionary as an empty object when the Step Function serializer serializes the return value.You need to use a Lambda to transform the output of the S3 GetItem because the Map step doesn't seem to understand how to deserialize JSONL, which the GetItem outputs.It seems the only way to avoid problems with the Step Function serializer on the output of the Lambda is return a string that is the already JSON serialized object that has the DynamoDB type on each key in the JSON object (so it doesn't get broken by AWS's serializer), then deserialize it in the next step by using StringToJson. But that seems way too much overhead for something that should be straight-forward. It's less work to just have a lambda that directly writes to Dynamo using the AWS SDK, than to try and use the built-in DynamoDB PutItem step. This feels like it is defeating the purpose of step functions.On top of that, trying to find documentation on what to do when you receive that error message is very difficult. Almost all AWS documentation for DynamoDB is on using the SDK directly, which abstracts the JSON format that DynamoDB expects.CommentShareBill Hannahanswered a year ago0I would also like to point out this seems to be a bug. If you have only strings, you don't need to add the type information - all strings work.As soon as you have a number or a boolean, you need to add type information. However, the JSON format is limited to types allowed:ObjectArrayStringNumberBooleanI would expect that given the very limited allowed types in the JSON specification, you would be able to get away without providing type qualifiers to the JSON for flat JSON objects. The deserializer would be trivial:{ "name": "foo", "count": 1, "isValid" : true } // could be easily tranformed to the following by a deserializer{ "name": { "S": "foo" }, "count": { "N": "1" }, "isValid": { "BOOL": true }}But I cannot change the deserializer, I'm stuck with either having everything a string, or jumping through hoopsCommentShareBill Hannahanswered a year ago0Just a side observation, away from the main issue:Event Bridge watching the S3 bucketAppFlow is actually integrated with EventBridge, and each flow execution will emit event into EB in the default bus. 
More info: here.I'd recommend you rather subscribe to this events, and kick-start the step function that way, than watching for S3 PutObject events.AppFlow may produce several objects as part of a the ingest (subject to size of the data), and in this case, if you watch for PutObject on S3, you will invoke Step Function multiple times (for each object).With the recommended approach, you can run a single SF execution and loop through all S3 objects (ListObjects) stored as part of the flow execution.I hope this make sense.Kind regards,KamenCommentShareKamen Sharlandjievanswered a year ago0That is how it is configured - App Flow -> Event Bridge -> Step function invocation that gets passed the S3 objects.The issue isn't how the objects are passed to the step functions, or how many there are. The problem is how do you take these objects and write them to DynamoDB. You must convert the JSONL that is the result of the Event Bridge event. The objects are passed to the Step Functions as JSONL (stream of JSON objects with no separator). In order to process those objects, you need to:Convert JSONL to JSON Array (Can't use Dynamo batch write, since it is limited to 25 items per invocation)Add Dynamo Type Information to each field in each object (because if you have any field that's not a String, it won't write).Pass to Map step with a DynamoDB PutItem step insideThere is no "No Code" option for this flow. This is why I say it seems to defeat the purpose of the Step FunctionCommentShareBill Hannahanswered a year ago"
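The "trivial deserializer" sketched above can be written with boto3's TypeSerializer, so a Python Lambda step (shown here instead of the C# used in the thread) could emit DynamoDB-typed JSON for the Map/PutItem state; this is a hedged sketch, not the poster's implementation:

    import json
    from decimal import Decimal

    from boto3.dynamodb.types import TypeSerializer

    serializer = TypeSerializer()

    def to_dynamodb_json(item: dict) -> dict:
        # {'name': 'foo', 'count': 1, 'isValid': True}
        #   -> {'name': {'S': 'foo'}, 'count': {'N': '1'}, 'isValid': {'BOOL': True}}
        return {key: serializer.serialize(value) for key, value in item.items()}

    # Numbers are parsed as Decimal because the serializer rejects floats.
    item = json.loads('{"name": "foo", "count": 1, "isValid": true}',
                      parse_float=Decimal, parse_int=Decimal)
    print(to_dynamodb_json(item))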
"I took a exam on today but pearson revoked my exam on the middle of the exam.Exam:SAA-C02 - AWS Certified Solutions Architect - Associate - English (ENU)Date:Monday, March 21, 2022Time:10:30 AM Japan Standard TimeThe supervisor told me in a chat during the exam that my face was off-camera and it is rule violation. Concentrating on the problem, I thought i cloud do. So since then, I've been consciously making sure that my face is projected on the screen. But after that, the exam supervisor revoked my exam, saying I had violated the rules. Sincierely, I didn't do anything wrong, so I wanted to chat and hear the reason, but I wasn't given that chance. It's been a lot of hard work in the last few days and I'm very upset with this system and the supervisor for taking away the opportunity to prove this hard work.I'd love to hear exactly what I did wrong. Or, if the process for canceling the exam is wrong, I hope you give me a chance to take the exam right away.FollowComment"
Pearson Vue terminated the exam without violation
https://repost.aws/questions/QUO-DH6aw5Sk63nGHMU6PPJA/pearson-vue-terminated-the-exam-without-violation
false
[0] Hi, I faced the same issue on 16 July. The proctor told me the same thing; after completing the test I was just reviewing some flagged questions. Can you please help me: did you get any update on this or not, and how much time did they take? Thanks in advance. (Bharat Singh, answered 10 months ago)
"I found this on DeviceFarm documentation: https://docs.aws.amazon.com/devicefarm/latest/developerguide/limits.htmlThere is no limit to the number of devices that you can include in a test run. However, the maximum number of devices that Device Farm will test simultaneously during a test run is five. (This number can be increased upon request.)How can I increase the number of devices? It doesn't specify where or how to request it...I basically need this to run more test in parallel using the metered payment plan. Right now even if we have multiple tests making requests on the test project in device farm, only 5 tests are executed at a time, the remaining are pending.FollowComment"
How can I increase the number of devices that you can include in a test run in DeviceFarm?
https://repost.aws/questions/QUiQg7o8u1T3GqMSWW6E8gCA/how-can-i-increase-the-number-of-devices-that-you-can-include-in-a-test-run-in-devicefarm
true
[1] Accepted Answer: Hi. You can raise the cap from the us-west-2 Service Quotas console: https://us-west-2.console.aws.amazon.com/servicequotas/home/services/devicefarm/quotas (iwasa, Expert, answered a year ago)
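If you prefer to do it programmatically, a hedged sketch using the Service Quotas API in us-west-2; the quota code is looked up rather than hard-coded because the exact code for concurrent devices is not given here:

    import boto3

    quotas = boto3.client("service-quotas", region_name="us-west-2")

    # List Device Farm quotas to find the code for concurrent devices.
    token = None
    while True:
        kwargs = {"ServiceCode": "devicefarm"}
        if token:
            kwargs["NextToken"] = token
        page = quotas.list_service_quotas(**kwargs)
        for quota in page["Quotas"]:
            print(quota["QuotaCode"], quota["QuotaName"], quota["Value"])
        token = page.get("NextToken")
        if not token:
            break

    # Once the relevant QuotaCode is identified from the listing above:
    # quotas.request_service_quota_increase(
    #     ServiceCode="devicefarm", QuotaCode="L-XXXXXXXX", DesiredValue=10)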
"Greetings AWS team,I was going to create a new cloudformation stack via the UI console and in chrome 91.0.4472.77, I'm getting an error in the dev console.jquery.min.js:2 Blocked form submission to 'https://cf-templates-1xuhfkxmzdev5-us-east-1.s3-external-1.amazonaws.com/' because the form's frame is sandboxed and the 'allow-forms' permission is not set.Tried uploading a yaml and also going through the template designer and both give the same error.Works fine in Firefox though.Cheers,BraydonEdited by: ats-org on Jun 10, 2021 11:15 AMFollowComment"
Creating a stack UI bug?
https://repost.aws/questions/QUeoju3RfdQiK9FtHWyIcgwg/creating-a-stack-ui-bug
false
0Well after a couple days it started working again so ignore this.CommentShareats-organswered 2 years ago
When I connect my mobile to an OPPO A15 hotspot, the CloudFront CDN URLs load slowly (25 seconds). The URLs load fine on other wifi connections. At the same time, other applications such as Instagram and YouTube run well on that OPPO A15 hotspot, and random URLs from the internet also load well on that connection.I don't know whether it is a Glide library problem, a hotspot problem or a URL problem.FollowComment
CloudFront URL taking longer time in specific wifi connection
https://repost.aws/questions/QUNzLfiZ2TS1mg1Ye8mDqGFg/cloudfront-url-taking-longer-time-in-specific-wifi-connection
false
"0In short, I don't think there's much that we can suggest here other steps to determine where the extra time is being taken.There are several things that happen during page load:The browser/operating system does a DNS lookup for the website. This resolves to CloudFront. Part of this process is determining the closest CloudFront point of presence (POP).The ISP/internet routes the traffic to the POP.The browser sets up a TCP session which might be for HTTP (port 80) or HTTPS (port 443).If using port 443 the encryption parameters are negotiated.The browser requests the resource (the initial page) from CloudFront.If the resource is not in cache it is retrieved from the origin.The browser then repeats this process (steps 1-6) for all other resources that are specified in the initial page.All of these things take time and additional time at each step can add up very easily. It's possible that JavaScript libraries may do additional work during steps 2-6.You can troubleshoot the time take for (1) using nslookup or dig. You may need to clear the DNS cache on your hotspot or computer in order to do a valid test. Steps 2-6 can be tested using tools such as curl or wget. By testing on various networks and with different computers you may be able to determine where the problem is and go from there.CommentShareEXPERTBrettski-AWSanswered 5 months agoAsit Dixit 5 months agoThe problem was in IPv6 when I truned of it in cloudfront it worked fine.Share"
"Hi, the error "The maximum number of addresses has been reached. (Service: AmazonEC2; Status Code: 400; Error Code: AddressLimitExceeded; Request ID: ce98daec-b6b3-424c-ae3e-aa73d32e15ba; Proxy: null)" keeps showing up while running "agc account activate". Could u give any clue to fix the issue? thanks.FollowComment"
Error: maximum number of addresses has been reached.
https://repost.aws/questions/QUeM8d3--ERCS5FiYM5xHNaQ/error-maximum-number-of-addresses-has-been-reached
true
"0Accepted AnswerThat error suggests you've run out of Elastic IP addresses, which by default are limited to 5 for an account but more can be requested. I can't see any mention of Elastic IPs in the agc doco though. Are you running "agc account activate" letting it create its own VPC or passing "--vpc" to tell it to use an existing custom one?CommentShareEXPERTskinsmananswered a year agoAWS-User-0032383 a year agoThanks skinsman, it worked through by using existing vpc.Share"
"I have an enormous amount of contacts with Chime. Is there a way to group contacts in a sub-file that can be titled by the user ie: HR, Dock, TOM?FollowComment"
"Chime Contact Groups, Department Groups, Sub-Folders"
https://repost.aws/questions/QUs05WzdOWRaaJQkoWkyqs1w/chime-contact-groups-department-groups-sub-folders
false
"0The closest you can do is to create chat rooms and add users specific to each department, etc. However, there is no native feature available to logically group users within Chime. Also, there is no feature to create folder like hierarchy for chat rooms or users.CommentShareTaka_Manswered 8 months ago"
"I can't see couple of my buckets. I am sure I have not deleted it, it certainly contain important files. How Can I found or recover it.FollowCommentskinsman EXPERT2 months agoSo are you looking in the S3 console? Can you see other buckets that you own, there's just a couple missing? Have you tried "aws s3 ls" in the AWS CLI?Share"
Bucket not found
https://repost.aws/questions/QUcQYZEpl7SVmPpEVtgFxg7w/bucket-not-found
false
"0Bucket don't go missing unless deleted. check belowMake sure you’re in the right region.Make sure you’re in the right account and Access.Go to the Cloudtrail console and check to see if there was a DeleteBucket call.If you recall file names you can use this short and ugly way to do search file names using the AWS CLI:aws s3 ls s3://your-bucket --recursive | grep your-search | cut -c 32-CommentShareAWS-User-3429271answered 2 months agoEXPERTalatechreviewed 2 months agolernky 2 months agoThanks , I will check it... I am novoice in AWSShare0There is nothing in cloudtrail , in buket section there is "Global" selected and commnad line is not returning anything.. Is there any technical way where I can raise a ticket or someone can help to locate the bucket ?CommentSharelernkyanswered 2 months ago"
"I am transforming my table by adding new columns using SQL Query Transform in AWS Glue Job studio.SQL aliases- studyExisting Schema from data catalog - study id, patient id, patient ageI want to transform the existing schema by adding new columns.new columns - AccessionNoTransformed schema - study id, patient id, patient age, AccessionNoSQL query - alter table study add columns (AccessionNo int)Error it gives-pyspark.sql.utils.AnalysisException: Invalid command: 'study' is a view not a table.; line 2 pos 0;'AlterTable V2SessionCatalog(spark_catalog), default.study, 'UnresolvedV2Relation [study], V2SessionCatalog(spark_catalog), default.study, [org.apache.spark.sql.connector.catalog.TableChange$AddColumn@1e7cbfec]I tried looking at AWS official doc for SQL transform and it says queries should be in Spark Sql syntax and my query is also in Spark Sql syntax.https://docs.aws.amazon.com/glue/latest/ug/transforms-sql.htmlWhat is the exact issue and please help me resolve.ThanksFollowComment"
SQL Query Transform | AWS Glue Job
https://repost.aws/questions/QUVGy8btY-SXuiQOK6xiqWcw/sql-query-transform-aws-glue-job
true
"0Accepted AnswerA DDL like that is mean to alter and actual catalog table, not an immutable view like is "study".You also have to add some content to the column (even if it's a NULL placeholder you will fill in later).Even better if you can set the value you need here using other columns.For instance:select *, 0 as AccessionNo from studyCommentShareGonzalo Herrerosanswered 3 months agoPrabhu 2 months agoPls accept apologies for the late response as I was busy with MVP deadlines.It actually worked. Thanks a lot @Gonzalo Herreros.So, a follow up question.when a data catalog table is getting transformed with SQL transform, does the catalog table always referenced as a View?ShareGonzalo Herreros 2 months agoThey are separate things, a DDL (ALTER TABLE) updates the table, all the other transformations just exists while you make the queryShare"
"Code works in glue notebook but fails in glue job ( tried both glue 3.0 and 4.0)The line where it fails is,df.toPandas().to_csv(<s3_path>,index=False)no detail message in glue logs2023-05-19 19:08:37,793 ERROR [main] glue.ProcessLauncher (Logging.scala:logError(73)): Unknown error from Python: Error Traceback is not available.Obviously the data frame is large as 500MB specifically, but it succeeds in glue notebook. Wondering if there is any subtle differences internally in glue notebook and glue job that is not obvious or some kind of bug.P.S: write to S3 using Databricks technology works too.FollowCommentrePost-User-0048592 8 days agoalso tried coalesce(1), still resulting in same error using glue jobShare"
Command failed with exit code 10
https://repost.aws/questions/QUXKTQq3jTQQWSXKdWERdUmA/command-failed-with-exit-code-10
true
"0Accepted AnswerMore likely this error can happen when dealing with large datasets while they move back and forth between Spark tasks and pure Python operations. Data needs to be serialized between Spark's JVMs and Python's processes. So, in this regard my suggestion is to consider processing your datasets in separate batches. In other words, process less data per Job Run so that the Spark-to-Python data serialization doesn't take too long or fail.I can also understand that, your Glue job is failed but the same code is working in Glue notebook. But, in general there is no such difference when using spark-session in Glue job or Glue notebook. To compare, you can run your Glue notebook as a Glue job. To get more understanding of this behavior, I would suggest you to please open a support case with AWS using the link hereFurther, you can try below workaround for df.toPandas() using below spark configuration in Glue job. You can pass it as a key value pair in Glue job parameter.Key : --confValue : spark.sql.execution.arrow.pyspark.enabled=true, --conf spark.driver.maxResultSize=0CommentShareSoma Sekhar Kanswered 6 days agorePost-User-0048592 6 days agoThanks that workedShareGonzalo Herreros 5 days agoBear in mind that's not optimal, you are still bringing all the data in the driver memory and disabling the memory safety mechanism by setting it to 0Share0Very likely you are running out of memory by converting toPandas(), why don't you just save the csv using the DataFrame API?, even if you coalesce it to generate a single file (so it's single thread processing), it won't run out of memory.CommentShareGonzalo Herrerosanswered 7 days agorePost-User-0048592 7 days agoTried that did not worked either. well i can try various other options, but I'm puzzled how the same code works in glue notebook without adding any extra capacity.Share0Excellent response. I was able to get around the issue by adding the spark configuration/Glue job parameter --conf mentioned. Thanks a lot.CommentSharerePost-User-0048592answered 6 days agoSoma Sekhar K 5 days agoGood to hear. Happy to help you.Share"
"I have noticed that Ordering data slows down performance sometimes from 300ms to 30,000ms when ordering data, for example in gremlin :g.V().hasLabel('user').range(1,20)would be much faster thang.V().hasLabel('user').order().by('out('orderForStore_14').values('avgOrderValue')).range(1,20)Queries we have are more complex than this as we filter out more things based on connections but it the same issue as the example above.I know that when ordering data we have to fetch all the data first then order them then only select the range, where as without ordering we just get the first items in the range in whatever order they are in. thats why its way faster. But how do we handle such issue ? What is the most efficient way to do it ?FollowComment"
What is the most efficient way to Order data in Neptune database ?
https://repost.aws/questions/QUysS8Kl46SYic-GnplpxHaw/what-is-the-most-efficient-way-to-order-data-in-neptune-database
false
"0Neptune already builds indices on every vertex, edge, and property in the graph [1]. That being said, those indices are not used for order().by() operations. Presently, order().by() operations require fetching the resultant vertices/edges, materializing the properties that you want to serialize out to the client, and then performing the ordering on the worker thread that is executing the query. So if the customer is fetching many results ahead of the order().by().limit(), the query is going to take a long time to execute.Alternatives to doing largeorder().by() operations:Fetch all of the results and perform the ordering in the app layer. As you mention above, it is much faster just to fetch the results vs. fetching the results and performing the ordering in the query.Build constructs into the data model to curtail the number of objects that need to be ordered. If ordering by date, you could create intermediate vertices for the lower cardinality constructs of a date (year, month, day). Then build a traversal that can take advantage of these vertices to limit what needs to be fetched before ordering.If ordering is a common task, they may want to leverage an external store that maintains ordering. It is not uncommon for customers to build an architecture leveraging Neptune Streams to push to Elasticsearch (and leveraging Neptune's Elasticsearch integration [2] with ordering) or using a sorted set in ElastiCache Redis (materializing the results as they are written to Neptune and building the sorted set from the changes as they are exposed via Neptune Streams) [3].(NDA/roadmap) Longer-term, the Neptune team is working on improvements for property value materialization, which should have a positive effect on ordering. There is presently no defined timeline for these changes, but likely that they would begin to appear mid-to-late 2021 at the earliest.[1] https://docs.aws.amazon.com/neptune/latest/userguide/feature-overview-data-model.html [2] https://docs.aws.amazon.com/neptune/latest/userguide/full-text-search.html [3] https://aws.amazon.com/blogs/database/capture-graph-changes-using-neptune-streams/CommentSharerePost-User-2054795answered 3 months ago"
"Hello all,Hope you are doing well!I'm working for one client who is trying to move his application to Saas mode.The client is streaming his own application via Appstream to his customers which are companies who buy the application and then have their employees use the applicationWe want to integrate the Appstream with AWS Managed AD but we have one doubt regarding the administration of the end-users.In fact, The client who is streaming the application does not want to manage the end-users for each customer and would like to delegate these tasks to the customers' companies who are buying the app, meaning each company manages its own users.The question: do you know how the technical support team from the end users company, can manage the active directory and do simple operations only for their own company, like users' creation, resetting passwords etc?Thank you in advance for your helpFollowComment"
User Management in Appstream integrated with AWS Active Directory
https://repost.aws/questions/QUzHhYdD6-TV2gkij2SzdSug/user-management-in-appstream-integrated-with-aws-active-directory
false
"Hi,I am migrating dashboard from one aws account to another aws account. In the target account , i want user to have access so that they can copy the dashboard to create new one.But i am not able to do this via API. Is there any api which can provide save As Previlages?FollowComment"
How to give saveAs priviliges for a dashboard to a user group using API
https://repost.aws/questions/QU7XHMb3wDTeC6L5v0lg4FRA/how-to-give-saveas-priviliges-for-a-dashboard-to-a-user-group-using-api
false
"Hi, I have been banging my head trying to get this working and cannot figure it out.I have an ECS fargate cluster in 2 private subnets. There are 2 public subnets with NatGWs (needed for the tasks running in Fargate). Currently I have S3 traffic going through the NatGWs and I would like to implement an S3 endpoint as "best practice". I have created CFN scripts to create the endpoint and associated security group. All resources are created and appear to be working. However I can see from the logs that traffic for s3 is still going through the NatGWs. Is there something basic that I have missed? Is there a way to force the traffic from the tasks to the S3 endpoints?The fargate task security group has the following egress: SecurityGroupEgress: - IpProtocol: "-1" CidrIp: 0.0.0.0/0Here is the script that creates the enpoint and SG: endpointS3SecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: "Security group for S3 endpoint" GroupName: "S3-endpoint-sg" Tags: - Key: "Name" Value: "S3-endpoint-sg" VpcId: !Ref vpc SecurityGroupIngress: - IpProtocol: "tcp" FromPort: 443 ToPort: 443 SourceSecurityGroupId: !Ref fargateContainerSecurityGroup # S3 endpoint endpointS3: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: 's3:*' Resource: '*' SubnetIds: - !Ref privateSubnet1 - !Ref privateSubnet2 VpcEndpointType: Interface SecurityGroupIds: - !Ref endpointS3SecurityGroup ServiceName: Fn::Sub: "com.amazonaws.${AWS::Region}.s3" VpcId: !Ref vpcThanks in advance.Regards, Don.FollowComment"
CFN - Advice for adding an S3 endpoint to private subnets for fargate task access
https://repost.aws/questions/QUKG3n-ZmGS0SiuJIxtCvYnw/cfn-advice-for-adding-an-s3-endpoint-to-private-subnets-for-fargate-task-access
true
"1Accepted AnswerInteresting! I didn't know about that S3 limitation but I see it's mentioned in https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html: "To create an interface endpoint for Amazon S3, you must clear Additional settings, Enable DNS name. This is because Amazon S3 does not support private DNS for interface VPC endpoints."I've used Interface endpoints for lots of services but not S3, as I've always stuck with the free Gateway endpoints for that as you've now done.CommentShareEXPERTskinsmananswered 4 months agoDon 4 months agoThanks skinsman. I think I got confused with the documentation and should have started with the Gateway rather that Application endpoint.Share0In AWS::EC2::VPCEndpoint you need to set PrivateDnsEnabled: true to enable the default AWS_managed Private Hosted Zone (PHZ) that will cause DNS resolution of the S3 service endpoint to go to your private IP address (for the VPCEndpoint) instead of the service's standard public IP address.The other alternative is to manage your own PHZ (AWS::Route53::HostedZone) which is what you need to do if you're sharing the interface endpoint across VPCs. See https://www.linkedin.com/pulse/how-share-interface-vpc-endpoints-across-aws-accounts-steve-kinsman/ for example.CommentShareEXPERTskinsmananswered 4 months agoDon 4 months agoHi skinsman, thanks for the quick response. Unfortunatley PrivateDNSEnabled is not available for s3 interface endpoints. I get the following error from CFN when building the stack with it set to true:Private DNS can't be enabled because the service com.amazonaws.eu-west-1.s3 does not provide a private DNS name.Regards, Don.Share0I converted the endpoint to Gateway and now it all works as expected. Thanks to skinsman for giving me the strength to go back into the documentation.Here is the updated CFN script for those of you who come across the same issue: # S3 endpoint security group endpointS3SecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: !Sub "Security group for S3 endpoint" GroupName: !Sub "inclus-s3-endpoint-sg" Tags: - Key: "Name" Value: !Sub "inclus-s3-endpoint-sg" VpcId: !Ref vpc SecurityGroupIngress: - IpProtocol: "tcp" FromPort: 443 ToPort: 443 SourceSecurityGroupId: !Ref fargateContainerSecurityGroup # S3 endpoint endpointS3: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: 's3:*' Resource: '*' RouteTableIds: - !Ref privateRouteTable1 - !Ref privateRouteTable2 VpcEndpointType: Gateway ServiceName: !Sub "com.amazonaws.${AWS::Region}.s3" VpcId: !Ref vpcCommentShareDonanswered 4 months ago"
In the AWS Management account 1111111 I have enabled CloudTrail. All CloudTrail logs are sent to the S3 bucket XXXX in the Audit account 2222222. This part of the configuration works fine.I am now trying to enable the CloudTrail logs to be sent to CloudWatch in account 2222222. Because CloudTrail is configured at the Org level in account 1111111 but the logs are in an S3 bucket in account 2222222, when I try to enable CloudWatch I get an error message saying There is a problem with the role policyHas anyone configured something like this before, and do they have any idea what the role should look like?FollowComment
Org level CloudTrail with CloudWatch
https://repost.aws/questions/QU_dJEItiWRzu-bb9eNiTtzQ/org-level-cloudtrail-with-cloudwatch
false
"0At this time, CloudTrail can only support sending logs to a CloudWatch log group in the same account. This is owing to the fact that CloudTrail doesn't support AWS Organizations delegated admin feature. An alternative solution would be to use Kinesis or Lambda to automate writing those CloudWatch logs to a log group in another account.Please look at the Centralized Logging reference architecture to see how your use case can be achieved using other services: https://aws.amazon.com/solutions/implementations/centralized-logging/CommentShareNoamanswered 8 months ago"
"Hi, i've created a VPN connection from GCP to AWS following this guide: https://blog.searce.com/connecting-clouds-aws-gcp-771eb25a2dc3 Then i've created a vm con GCP and i've tested the connection to AWS. On AWS i've a Elasticache instance which i can connect without problems, but if i try with the RDS instance i've:ERROR 2002 (HY000): Can't connect to MySQL server on '...' (115)Security group and firewall rules seems right in AWS and GCP. Both Elasticache and RDS instance are in the same VPC and the same subnets group. I also tried to set "Public access" on RDS without results.Some suggestions on that?ThanksFollowComment"
Multi cloud VPN connection to RDS
https://repost.aws/questions/QUgAz4cHoyRyiZgc9mwmB9ZQ/multi-cloud-vpn-connection-to-rds
false
"0Hi.Since it's on the same subnet as Elasticash, I don't think there's a network problem.[Confirmation on AWS side]Are the information such as the inbound port and source IP address of the RDS security group correct?[Confirmation on Google Cloud side]Can I create Cloud SQL for MySQL within Google Cloud and connect from a Google Cloud virtual machine?CommentShareEXPERTiwasaanswered 5 months agorePost-User-0939086 5 months agoOn AWS the security group rules seem correct (i tried with a 0.0.0.0/0 too), i tested the connection from the GCP instance to my backend on ECS with curl and it works.On GCP i've tested a VM with mysql and it works too. For now i can't test Cloud SQL instance because i need permissions and i'm waiting for the admin.Update: i've created an EC2 instance with mysql and i can connect on mysql from GCPShare"
"I'm reading the docuemntation here: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.htmlI'm confused because it indicates that the network firewall logs can send logs to S3 bucket; however, when it gives the example policy it has delivery.logs.amazonaws.com for the principal of the bucket policy.However, if I'm using network firewall wouldn't I have to use network-firewall.amazonaws.com as a service pricnipal instead? for the bucket policy?FollowComment"
Logging Network Firewall Logs to S3 bucket. What should I use for my Service Principal?
https://repost.aws/questions/QU0MadNkH1Sh-NscbgavXHKg/logging-network-firewall-logs-to-s3-bucket-what-should-i-use-for-my-service-principal
false
"1Hi,Here is a sample policy for your reference. Follow the sample from below page, you can consider to use delivery.logs.amazonaws.com as the Principle.https://docs.aws.amazon.com/network-firewall/latest/developerguide/logging-s3.html{ "Version": "2012-10-17", "Statement": [ { "Sid": "AWSLogDeliveryWrite", "Effect": "Allow", "Principal": {"Service": "delivery.logs.amazonaws.com"}, "Action": "s3:PutObject", "Resource": [ "arn:aws:s3:::log-bucket/flow-logs/AWSLogs/111122223333/*", "arn:aws:s3:::log-bucket/flow-logs/AWSLogs/444455556666/*" ], "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}} }, { "Sid": "AWSLogDeliveryAclCheck", "Effect": "Allow", "Principal": {"Service": "delivery.logs.amazonaws.com"}, "Action": "s3:GetBucketAcl", "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET1" } ]}CommentSharejcvipanswered 7 months ago"
"I am processing SES events with a lambda that receives the events via an SNS topic that is a destination for a configuration set. I have no problems receiving Send, Delivery, Bounce, Open and so on. My use case sends emails to a single address, with a bcc to a second address. My problem is I don't see a way of distinguishing the recipients for an Open event. By contrast, it is straightforward to distinguish the recipients for a Delivery event. In that case the SNS JSON event payload has a "delivery" key that contains a "recipients" array that lists the email addresses delivered to. In my case, two Delivered events, one for the "to" address and one for the "bcc" address. Perfect. For the "Open" event there is a corresponding "open" key in the payload. But it doesn't seem helpful. It contains a timestamp, a userAgent and an ipAddress. I don't know what the ipAddress refers to, but whatever it is, it seems unlikely to provide a robust way of determining which recipient opened the email. Can anyone help me figure out which recipient opened the email? TIA.FollowComment"
Unable to track SES/SNS Open events by recipient
https://repost.aws/questions/QU4CZXIyxrSuqGTFgbbQXatQ/unable-to-track-ses-sns-open-events-by-recipient
false
"Hi, I am wondering if it is possible to load an Real Time Operating System into the Amazon DeepRacer. However, the OS cannot be booted up due to security issues. The message was displayed on the external monitor connected to DeepRacer. Specifically, I followed the instructions from here. https://docs.aws.amazon.com/deepracer/latest/developerguide/deepracer-vehicle-factory-reset-instructions.html Usually, there is an option to disable secure boot in the BIOS setup, but I can not find it.FollowComment"
Loading a Real Time Operating System into the Amazon DeepRacer
https://repost.aws/questions/QUVwrZZoFdThu-9RspeobStQ/loading-a-real-time-operating-system-into-the-amazon-deepracer
false
"0Hello,We do not allow to disable secure boot due to security reasons on AWS DeepRacer.Thank you.CommentShareAWS-Pratikanswered a year ago"
"How to optimise the cost of lambda. we are getting data from AMS through lamdba.currently we created 3 lambda functions from each dataset like one function for sp_traffic , one fucntion for sp_conversion and one for budget_usage.And each function is have 128 mb of memory. How can we reduce the cost of the lambda.FollowComment"
How to optimise the cost of Lambda ?
https://repost.aws/questions/QUrw7PzVBITsOIt3SQuPCrMw/how-to-optimise-the-cost-of-lambda
false
"1Consider using a Compute Savings Plan.https://aws.amazon.com/savingsplans/compute-pricing/?nc1=h_ls"AWS Compute Optimizer" can be used to optimize Lambda performance.If you are over-performing, you can reduce costs by checking here and setting the appropriate performance.https://docs.aws.amazon.com/compute-optimizer/latest/ug/view-lambda-recommendations.htmlCommentShareEXPERTRiku_Kobayashianswered a month ago0In some situations 128MB maybe is not the cheapest configuration.Please try https://github.com/alexcasalboni/aws-lambda-power-tuningTo find the best configuration for your lambda.CommentShareCarlos Garcesanswered a month ago"
"When I try to access my websites lightsail servers on AWS or visitors try to access one of the websites, processes stay hanging forever and I get error message site cannot be reached and when I do windows diagnostics it says my computer configuration is fine but DNS server not responding. What could be the problem?FollowCommentOleksii Bebych 4 months agoPlease provide more details. What is your Lightsail configuration (instance, OS, network, application, etc.)?Was it working ever ?Share"
Accessing my website and its server on AWS taking forever
https://repost.aws/questions/QUf2QLIOdZSA-ursKgQ8hE-Q/accessing-my-website-and-its-server-on-aws-taking-forever
false
"0There are a few potential causes for this issue. One possibility is that the DNS server is not properly configured or is experiencing issues. Another possibility is that there is a problem with the network connection to the Lightsail server or the server itself. Additionally, it could be a problem with the firewall or security group settings in your AWS account. Without more information about your specific setup, it is difficult to provide a more specific diagnosis. It is recommend to check the Lightsail and AWS Console for any error or issues, also checking the DNS settings, connectivity to the server and firewall rules.CommentShareDivyam Chauhananswered 4 months agorePost-User-2660604 4 months agoThank you Oleksil and Divyam. Apparently just tried accessing the servers now and I was able but visitors still can't access one of the websites.My operating system is Linus. I have two websites. The servers are both currently running. One is Nginx running on NodeJS(website accessible) and the second is Apache running on WordPress (not accessible).I just checked my AWS health dashboard and it reports no issues.The second website using WordPress application is still not accessible to visitorsShare0If you have recurring periods of availability and performance issues, it most likely means your load is causing your host CPU to run in the burstable zone. A CPU may only run in its burstable zone for limited periods of time, meaning the instance can become less responsive if the load consistently requires the burst capacity. If that turns out to be the case, you will want to either spread the load across multiple instances, or consider using a larger instance.Lightsail provides documentation on instance health metrics here:https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-viewing-instance-health-metricsIn particular, the cpu utilization may be of interest in this situation:https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-viewing-instance-health-metrics#cpu-utilization-zonesCommentShareAWS-Andyanswered 4 months agoLords_777 4 months agoThank you Andy. This is rePost-User-2660604. Signed in as I AM User for convenience.It is not a recurring experience. Never had it and not significantly increased the content of my blog in the last 2 years save minor corrections here and there.Share0I'd like to express my appreciation to the contributors to this issue and the community as a whole. I rebooted the server concerned and the remaining problem which is that of website accessibility by visitors has been resolvedCommentSharerePost-User-2660604answered 4 months ago"
"Our instance i-052cca656183d431d showed running but nothing could connect. I ended up stopping (let is try for 10 -15 minutes), then forcing to stop it (stopped roughly 5 min later), and once it came back up I saw network issues in the log right before it crashedena: Reading reg failed for timeout. expected: req id[10] offset[88] actual: req id[57015] offset[88]ena: Reg read32 timeout occurredsystemd-networkd[24263]: ens5: Lost carriersystemd-networkd[24263]: ens5: DHCP lease lostand then a few minutes later no more logs until the reboot was successful.Is there an issue with Amazon network here?ThanksJonFollowComment"
"Server lost connection, logs show ena5 connection issues"
https://repost.aws/questions/QUoll1rb8pTkqH-vTRDAJ4gw/server-lost-connection-logs-show-ena5-connection-issues
false
"0Hi. At this point, I do not see any active issues with EC2 or any other related services which has been reported on the PHD.Link- https://health.aws.amazon.com/health/statusThere could be multiple elements if you are trying to reach the instance from the on-premise/AWS environment etc. Check local FW settings or the ISP side issues, if any.You can also check if there was any PHD notifications that you received in your email/registered account.Checking the instance at this moment using internal tools does not show any issues as per the health checks. It also shows it is running for 45 minutes, however, for any additional checks, I would recommend opening up a support case.CommentShareSUPPORT ENGINEERAWS-User-Chiraganswered 9 months ago"
"AWS has a long history where we do not deprecate AWS service functionality unless for security reasons or under unusual circumstances. In 2017, Adobe announced the end-of-life for Flash will be December 31, 2020. In addition to Adobe, many of the most widely used internet browsers are also discontinuing Flash support in 2020. As a result, Amazon CloudFront will no longer support Adobe Flash Media Server and will be deprecating Real-Time Messaging Protocol (RTMP) distributions by December 31, 2020.On December 31, 2020 all CloudFront RTMP distributions will be deleted and CloudFront will deny requests made to those previously existing endpoints. All RTMP workloads should begin migrating to a standard CloudFront Web distribution and use one of several HTTP streaming protocols such as HTTP Live Streaming (HLS), Dynamic Adaptive Streaming over HTTP (DASH), Microsoft Smooth Streaming (MSS), or HTTP Dynamic Streaming (HDS).To learn more about resources available to help migrate from RTMP to HTTP streaming, please refer to CloudFront's documentation on using Web distributions for streaming live and on-demand video and other useful tutorials available . During your migration, you can re-create your Web distribution to have similar cache configuration settings as your RTMP distribution.If you require support to change an apex domain from one distribution to another without disruption to your traffic, or have any other questions please create a support case with AWS Support.FollowCommentNguyen_M SUPPORT ENGINEERa year agoThis is an announcement migrated from AWS Forums that does not require an answerShare"
"[Announcement] RTMP Support Discontinuing on December 31, 2020"
https://repost.aws/questions/QUoUZgHZh7SEWlnQUPlBmVNQ/announcement-rtmp-support-discontinuing-on-december-31-2020
false
0[Announcement] Does not require an answer.CommentShareEXPERTIsraa-Nanswered 18 days ago
"AWS Resource Access Manager (AWS RAM) now supports customer managed permissions so you can author and maintain fine-grained resource access controls for supported resource types. AWS RAM helps you securely share your resources across AWS accounts, within your organization or organizational units (OUs), and with AWS Identity and Access Management (IAM) roles and users. With customer managed permissions, you can apply the principles of least privilege, or the minimum permissions required to perform a task.You can now define the granularity of your customer managed permissions by precisely specifying who can do what under which conditions for the resource types included in your resource share. For example, as a cloud security admin, you can author tailored customer managed permissions for Amazon Virtual Private Cloud IP Address Manager (IPAM) pools, which help manage your IP addresses at scale. Then the network admin can share the IPAM pools using the tailored permissions so that developers can assign IP addresses but not view the range of IP addresses other developer accounts assign. For granting access to sensitive actions such as viewing the IP address range in an IPAM pool, you can add conditions such as requiring the actions are performed by users authenticated using multi-factor authentication.Customer managed permissions are now available in all AWS Regions where AWS RAM is supported, including the AWS GovCloud (US) Regions.To learn more about customer managed permissions, see the AWS RAM User Guide. To get started with using AWS RAM to share resources, visit the AWS RAM Console.FollowComment"
AWS Resource Access Manager supports fine-grained customer managed permissions
https://repost.aws/questions/QULKRV0fcNQsi-oLkeqTVzrA/aws-resource-access-manager-supports-fine-grained-customer-managed-permissions
false
"Just some documentation show that should use FIPS when need to follow FIPS requirement. But less documentation about how to enable this through Java SDK when connect to S3.So here are my questions?We can confirm that TLS v1.2 is used when connected to S3, show I still need to use FIPS endpoint? Besides the SSL connection, what does the FIPS endpoint do exactly? Check tls version?Any detailed document if I need to use FIPS endpoint with Java S3 SDK 1.x.Thanks!FollowComment"
If it is still required to use FIPS endpoint when using tls1.2 communication with S3?
https://repost.aws/questions/QUGMLYrnNSSjGfVNaMAwmFtw/if-it-is-still-required-to-use-fips-endpoint-when-using-tls1-2-communication-with-s3
false
"1Hello,Greetings for the day!We can confirm that TLS v1.2 is used when connected to S3, show I still need to use FIPS endpoint? Besides the SSL connection, what does the FIPS endpoint do exactly? Check tls version?No if you are already using the TLS v1.2 it would not require to use FIPS endpoints. The update from AWS is that "TLS 1.2 WILL BE REQUIRED FOR ALL AWS FIPS ENDPOINTS BEGINNING MARCH 31, 2021" i.e. if you are using FIPS endpoints already then you have to also update to TLS v1.2. To help you meet your compliance needs, we’re updating all AWS Federal Information Processing Standard (FIPS) endpoints to a minimum of Transport Layer Security (TLS) 1.2. We have already updated over 40 services to require TLS 1.2, removing support for TLS 1.0 and TLS 1.1. Beginning March 31, 2021, if your client application cannot support TLS 1.2, it will result in connection failures. In order to avoid an interruption in service, we encourage you to act now to ensure that you connect to AWS FIPS endpoints at TLS version 1.2. This change does not affect non-FIPS AWS endpoints.Regarding the FIPS endpoints, FIPS (Federal Information Processing Standards) are a set of standards that describe document processing, encryption algorithms and other information technology standards for use within U.S. non-military government agencies and by U.S. government contractors and vendors who work with the agencies. FIPS 140-2, “Security Requirements for Cryptographic Modules,” was issued by the U.S. National Institute of Standards and Technology (NIST) in May, 2001. The standard specifies the security requirements for cryptographic modules utilized within a security system that protects sensitive or valuable data.[+] https://s3.amazonaws.com/smhelpcenter/smhelp940/classic/Content/security/concepts/fips_mode.htm[+] https://aws.amazon.com/compliance/fips/Any detailed document if I need to use FIPS endpoint with Java S3 SDK 1.x.Since, you are already using the TLS v1.2 so, it would not require you to use the FIPS endpoints however, if you want to use the FIPS endpoints then please note that only some AWS services offer endpoints that support Federal Information Processing Standard (FIPS) 140-2 in some Regions. Unlike standard AWS endpoints, FIPS endpoints use a TLS software library that complies with FIPS 140-2. These endpoints might be required by enterprises that interact with the United States government.To use a FIPS endpoint with an AWS operation, use the mechanism provided by the AWS SDK or tool to specify a custom endpoint. For example, the AWS SDKs provide an AWS_USE_FIPS_ENDPOINT environment variable.[+] FIPS endpoints - https://docs.aws.amazon.com/general/latest/gr/rande.html#FIPS-endpointsI was only able to found the document for the updated Java 2.x where we can setup the AWS_USE_FIPS_ENDPOINT environment variable in SdkSystemSetting (AWS SDK for Java - 2.18.16). Please refer the below document for more information.[+] https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/SdkSystemSetting.htmlCommentShareSUPPORT ENGINEERUjjawal_Sanswered 6 months agoxkkkkkkkkkk 6 months agoYes, agree with you all. I cannot find the document about how to use FIPS endpoints in SDK V1.x either. It would be great if you can provide some one..ShareUjjawal_S SUPPORT ENGINEER6 months agoHave you tried using the "withEndpointConfiguration" method when creating the client. 
Each AWS client can be configured to use a specific endpoint within a region by calling the withEndpointConfiguration method when creating the client.[+] https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-region-selection.html#region-selection-choose-endpointhttps://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/client/builder/AwsClientBuilder.EndpointConfiguration.htmlShare"
I have looked around and I can’t seem to find how often/if the Default Patch Baselines in SSM Patch Manager are updated. It seems to me they are pulled from an S3 bucket each time you run a scan but I can’t seem to find how often AWS is updating them and outside of going through the code myself I don’t see anything about which changes have been made.FollowComment
SSM Patch Manager default patch baseline updates?
https://repost.aws/questions/QUpgJqqbXiRjmmdxHirF_t8A/ssm-patch-manager-default-patch-baseline-updates
false
"0The Default Patch Baselines in SSM Patch Manager are updated by AWS on a regular basis, typically on a monthly basis. The updates are released as new versions of the Amazon Linux and Windows Server AMIs are published.When you run a patch scan, the latest available patch data is retrieved from the SSM Patch Manager service, which pulls the patch data from the S3 bucket. The patch data includes the latest patches for each supported operating system, as well as information about patch severity, installation priority, and other metadata.AWS recommends that you regularly update your Default Patch Baselines to ensure that you are applying the latest security patches and updates to your instances. You can also create custom patch baselines to specify your own patching criteria and schedules, if needed.CommentSharehashanswered a month ago"
Is there a way to set up alerts on WAF rules when BLOCKS from certain rulecrosses a minimum threshold?Please advise then we shall discuss implementation.FollowComment
aws waf Is there a way to set up alerts on WAF rules when BLOCKS from certain rulecrosses a minimum threshold?Please advise then we shall discuss implementation.
https://repost.aws/questions/QUw64o78EqRXOGlBSZBWclQg/aws-waf-is-there-a-way-to-set-up-alerts-on-waf-rules-when-blocks-from-certain-rule-crosses-a-minimum-threshold-please-advise-then-we-shall-discuss-implementation
false
"0Yes, WAF sends BlockedRequest metrics to CloudWatch. From CloudWatch you can then define alarms and actions to take when thresholds have been breached. See: Monitoring with Amazon CloudWatch.CommentShareEXPERTkentradanswered 6 months ago0The metric ** BlockedRequests** will be sent to CloudWatch for all the rules (Metric name = rule name) that are set to BLOCK and collectively for the whole Web ACL (Metric name = name of the Web ACL).Once a block action is performed, you can go to CloudWatch metrics console and navigate to the following:All ==> WAFV2 ==> Region, Rule, WebACLThere you will be able to see the Metrics for the Web ACL and the rules. You can then create Alarms for the individual *** BlockedRequest*** metric for when a threshold is breachedCommentShareAWS-User-3413787answered 21 days ago"
"Difference between Timed based and number based manifests in mpeg dash output?What are its advantages ?does hls also support this ?Edited by: Punter on Apr 5, 2021 7:49 AMFollowComment"
Difference between Timed based and number based manifests
https://repost.aws/questions/QUrqsl-A5lSzKdKsIyCDYjNg/difference-between-timed-based-and-number-based-manifests
false
"0Hi PunterI have answered your question below.Difference between Timed based and number based manifests in mpeg dash output?Can I ask what product you are referring to?The media attribute in the SegmentTemplate properties defines the URL where playback devices send segment requests. For number based manifest, the URL uses a $Number$ variable to identify the specific segment that is requested. When a playback device requests the segment, it replaces the variable with the number identifier of the segment.For time based manifest, uses the $Time$ variable instead of $Number$ in the URL of the media attribute.For your information, you can read our DASH media Attribute in SegmentTemplate for MediaPackage.https://docs.aws.amazon.com/mediapackage/latest/ug/segtemp-format-media.htmlWhat are its advantages ?It depends on what format your player supports. Number based manifest is more common then time based manifest.does hls also support this ?No, this is specific for DASH. HLS has a different manifest structure.SamCommentSharesamuelAWSanswered 2 years ago0akamai has some complications with number based manifest ? though they say,it supports but it isCommentSharePunteranswered 2 years ago0Looking at Akamai's Encoder guide, the example they provided is a DASH Number with Duration manifest.https://learn.akamai.com/en-us/webhelp/media-services-live/media-services-live-encoder-compatibility-testing-and-qualification-guide-v4.0/GUID-49A70F8E-7A5B-4D2F-8FF3-1EA079231C22.htmlIf you are having any issue streaming to Akamai with our AWS Elemental appliance products, you can raise a support case and we can follow up with you.Thanks,-SamCommentSharesamuelAWSanswered 2 years ago"
"After deploying (update) a Lambda for about 150 times in an automated way using CI, this suddenly fails since in eu-west-1 since 2021-09-14 ~16:17 CET.ResourceConflictException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:eu-west-1:NNNNNNNN:function:XXXXXXXX The last successful deploy was 2021-09-14 ~16:17 CET.The 2 last successful deploys now fail too, with the same error.Details are described and followed-up in [claudia/issues/226](https://github.com/claudiajs/claudia/issues/226).The closest issue comparable to what I see is [Terraform aws_lambda_function ResourceConflictException due to a concurrent update operation #5154](https://github.com/hashicorp/terraform-provider-aws/issues/5154#issuecomment-423912491), where it says, in 2018,OK, I've figured out what's happening here based on a comment here: AWS has some sort of limit on how many concurrentmodifications you can make to a Lambda function.I am pretty sure now that something changed on the side of AWS, either in general, in for this instance.Does anybody have any clue? Did something change? How can this be mitigated?FollowComment"
ResourceConflictException: … An update is in progress for resource: …
https://repost.aws/questions/QUEv2F5au3RA6OmXXwF3hTgQ/resource-conflict-exception-an-update-is-in-progress-for-resource
false
"0This could be related to https://forums.aws.amazon.com/thread.jspa?messageID=949152&#949152CommentSharejandecaanswered 2 years ago0I'm having this same problem. It started yesterday and it only happens with one lambda function. Other lambda functions are deployed just fine.I also noticed that the ci/cd tool is using an old version of the AWS SDK (1.11.834), and if I deploy the code using AWS CLI (2.2.37) it works. Could this be related?Edited by: raulbarreto-delivion on Sep 16, 2021 7:11 AMCommentShareraulbarreto-delivionanswered 2 years ago0AWS are rolling out this changehttps://aws.amazon.com/blogs/compute/coming-soon-expansion-of-aws-lambda-states-to-all-functions/You need to put a check for the function state in between the update_function_code and the publish version callsMake sure the state is active before proceedinghttps://docs.aws.amazon.com/lambda/latest/dg/functions-states.htmlCommentShareStuartanswered 2 years ago"
"Does S3 trigger ObjectCreated event on destination buckets after replication is succeeded?Checked the event notification documentation and quickly skimmed through the replication documentation but I found nothing explicitly statedSaw this question with a similar inquiry which suggests that it does, but I want to be sureFollowComment"
Does S3 Object Replication trigger ObjectCreated event?
https://repost.aws/questions/QUWy4DeorXSkOOm5otiBZPyg/does-s3-object-replication-trigger-objectcreated-event
true
"1Accepted AnswerYes there will be an ObjectCreated:Put event generated. Here is an example of the event created from the destination bucket when an object is replicated to it:{ "Records": [ { "eventVersion": "2.1", "eventSource": "aws:s3", "awsRegion": "us-east-1", "eventTime": "2022-10-26T14:07:48.018Z", "eventName": "ObjectCreated:Put", "userIdentity": { "principalId": "AWS:xxxxxxxxxxxxxxxxxxx:xxxxxxxxxxxxxxxx" }, "requestParameters": { "sourceIPAddress": "x.x.x.x" }, "responseElements": { "x-amz-request-id": "xxxxxxxxxxxxxxxxxx", "x-amz-id-2": "xxxxxxxxxxxxxxxxxxxxxxxxxx" }, "s3": { "s3SchemaVersion": "1.0", "configurationId": "test", "bucket": { "name": "dstBucket", "ownerIdentity": { "principalId": "xxxxxxxxxxxxxx" }, "arn": "arn:aws:s3:::dstBucket" }, "object": { "key": "FILE.JPG", "size": 79896, "eTag": "xxxxxxxxxxxxxxxxxxxx", "versionId": "xxxxxxxxxxxxxxxxx", "sequencer": "xxxxxxxxxxxxxxxxx" } } } ]}CommentShareEXPERTMatt-Banswered 7 months agorePost-User-1267709 7 months agoThank you!Share"
"I no longer need one of my two attached lightsail storage disks. I looked here (https://lightsail.aws.amazon.com/ls/docs/en_us/articles/detach-and-delete-block-storage-disks) and read that I should detach the lightsail disk prior to deleting it. I also read that I will continue to be charged for the disk after detaching it until I delete it.I detached the disk, but now it has disappeared from my console, and I see no way to "delete" it. Does my disk still need to be "deleted", and if so, how do I do that?FollowComment"
How to delete Lightsail disk after detaching it
https://repost.aws/questions/QUYEgJ5D4HQq-z8kfNWV6Y_Q/how-to-delete-lightsail-disk-after-detaching-it
false
"1If you have detached a Lightsail storage disk, it will continue to be charged until you delete it, even if it has disappeared from your console. To delete a detached storage disk in Lightsail, you can follow these steps:Log in to the Lightsail console.Click on the "Storage" tab.Click on the "Detached" tab to view all your detached storage disks.Select the storage disk you want to delete.Click on the "Actions" button and select "Delete" from the drop-down menu.Confirm that you want to delete the storage disk by clicking "Delete" in the confirmation window.Once you delete the detached storage disk, you will no longer be charged for it. Keep in mind that all data on the disk will be permanently deleted and cannot be recovered, so make sure you have backed up any important data before deleting the disk.CommentShareAWS_Guyanswered 3 months agonlk 3 months agoThanks AWS Guy. This worked. Yesterday, the detached disk was not showing up when I clicked on Storage. Today, the detached disk was there, and I was able to delete it.Share"
"Hi,I have configured a cloudwatch alarm based on an expression which involves 2 metrics.Expression = m2 - m1, where,m2: AWS/AutoScaling - GroupInServiceInstancesm1: ECS/ContainerInsights - TaskCountAlarm Config : Expression <= 5 for 1 datapoints within 1 minuteMissing data case is considered as threshold breach.The problem is that the alarm is not getting triggered as per the expectation. After the expression breaches the threshold(5), as per the alarm config mentioned above, it should trigger after a minute(1 datapoints within 1 minute) but the alarm gets triggered after a certain(inconsistent) amount of time which is causing the actions(autoscaling) associated to the alarm to be delayed. The delay ranges from 2 - 15 minutes.Please refer to this link for a screenshot.The blue line in the first graph denotes the expression value and red line the threshold. As can be seen, the expression crosses the threshold at 7:23 but the alarm gets triggered at 7:40. The In Alarm state(red bar) in the second chart is triggered after 17 minutes then it should have.Any help is really appreciated.FollowComment"
Alarm not getting triggered even if the metric crosses threshold
https://repost.aws/questions/QUHZtOdJp4TDilPt1c0Pfnwg/alarm-not-getting-triggered-even-if-the-metric-crosses-threshold
false
"1From the screenshot actually it looks like alarm is breached when your expression is greater than threshold, not lower.CommentShareA-Shevchenkoanswered a year agoDhruv Baveja a year agoI know but the expression is opposite. Here is a screenshot(https://ibb.co/Lnh0NfM) with the expression.Share1Hi Dhruv,Given the Alarm configuration and the screenshot that you have provided, they are not quite aligning with each other.What I can suggest is to look at the Alarm History, and check that particular StateUpdate happened at 7:40 as you mentioned to understand the reason of the triggering which could enlighten us why the alarm triggered. From the History of StateUpdate look for section starting with below for example:..."newState": { "stateValue": "ALARM", "stateReason": "Threshold Crossed: 1 out of the last 1 datapoints [42.7118644067809 (21/01/22 13:00:00)] was greater than or equal to the threshold (40.0) (minimum 1 datapoint for OK -> ALARM transition).",...This section will give you explanation on how the Alarm got triggered, and by what reason. The reason data also can provide you confirmation of the Alarm configuration of the threshold and the comparison operator.Further down information within stateReasonData like recentDatapoints, threshold, and evaluatedDatapoints sections will provide further details into the StateUpdate.Hope this helps to further troubleshoot your Alarm configuration and the state updates regarding ALARM state.CommentShareSUPPORT ENGINEERMunkhbat_Tanswered a year agoDhruv Baveja a year agoHi Munkhbat_T,Thanks for the detailed response.I am adding more details below from the history section. This is for a different timeframe(24/01/2022) as compared to the 1 mentioned in the original question.Here is the screenshot for alarm and metric value: https://ibb.co/s6RjBTH. As can be seen here, the alarm triggered at 4:12 when it was supposed to trigger at 3:49.Here is a screenshot from the history section: https://ibb.co/6gyjkgR for the same duration(bottom rows).Here is a screenshot of the state change data for the alarm triggering at 4:12 : https://ibb.co/7vLgPQR .Share"
"HelloThe tape gateway stores the data in S3, How can I see metrics for the S3 Bucket itself? You never specify which bucket to use when you create the VTL and tapesIn Cloudwatch I can see bytes uploaded, but I would like to know how much data is in the actual bucket, and how many request that has been made.I need to compare backups using the VTL vs using object storageFollowComment"
S3 Bucket for Tape Gateway??
https://repost.aws/questions/QUVO5YmPEpRP-vhkz6SwVXLA/s3-bucket-for-tape-gateway
false
"0The tape gateway stores data in S3 bucket owned by AWS Storage Gateway, so customer doesn't have access to the metrics for S3 bucket. Please note that the cost structure is different between File Gateway and Tape Gateway.Please refer to the following page.https://aws.amazon.com/storagegateway/pricing/CommentShareShashi-AWSanswered 4 years ago"
"How do I change my provisioned (in use) EFS storage class from a multi zone "Regional" setup to a "One zone" setup.(I did not find a clear option to do this on the EFS management console, or any documentation)FollowComment"
Change EFS storage classes availability from Regional to One zone
https://repost.aws/questions/QUA5rRppiORQaDpFFDhAB2ow/change-efs-storage-classes-availability-from-regional-to-one-zone
true
"2Accepted AnswerHello nadavkav,this setting cannot be changed after creation:https://docs.aws.amazon.com/cli/latest/reference/efs/update-file-system.htmlin the aws cli there isn't this optionThanksJoelCommentShareJoelanswered a year agonadavkav a year agoThank you Joel.So my workaround is:I have created a new EFS FS with "One zone" storage class, and I am syncing my data to this new storage.When the sync process is done, I will switch between them and delete the old one.Share"
"Hi Team,Writing to you regrading the exception received in instantiating the ml.g4d.xlarge instance which required for one of usecase, I am currently has 12 months free service aws account , when I tried to launch to notebook instance it throws following error."ResourceLimitExceededThe account-level service limit 'ml.g5.xlarge for notebook instance usage' is 0 Instances, with current utilization of 0 Instances and a request delta of 1 Instances. Please use AWS Service Quotas to request an increase for this quota. If AWS Service Quotas is not available, contact AWS support to request an increase for this quota."But I am able to launch c5,t2 instances successfully, can someone help me out how to get access to this and update limitations.FollowComment"
Sagemaker GPU Instance quota limits
https://repost.aws/questions/QURjQ3RHyCRqK6mxffN-HDWA/sagemaker-gpu-instance-quota-limits
false
"1Hello. The quota is different for different instance type. You can raise an increase of the quota for Notebook instance ml.g4dn.xlarge or ml.g5.xlarge via Service Quotas.Upon logging in the AWS Console, you can go here https://us-east-1.console.aws.amazon.com/servicequotas/home/services. Make sure you choose the right region (the above link defaults to N. Virginia region). Then type in "SageMaker" on the "AWS services" search bar on that page and click the only "Amazon SageMaker" in the result. Then find "ml.g5.xlarge for notebook instance usage" and "ml.g4dn.xlarge for notebook instance usage" categories. For each, when you click, it will show "Request quota increase" button. Click that button and start the process of requesting limit increase for those instances.CommentShareYudho Ahmad Diponegoroanswered 13 days ago"
"I would like to be able to perform local testing of my greengrass components before deployment. The issue is that my components normally use Greengrass IPC, meaning that I cannot perform local testing, as IPC messages cannot be sent/received as the local component test won't be happening in Greengrass.Is there a way to get around this ? Is there a method to mimic/test Greengrass IPC calls in a program, outside of a Greengrass deployment ?FollowComment"
Testing Greengrass Components Before Deployment with IPC
https://repost.aws/questions/QUtrWxHkNUQuauULjQMgco5Q/testing-greengrass-components-before-deployment-with-ipc
false
"0Hi. If you get your component into a unit test harness, you could mock the IPC calls and achieve a lot of automated test coverage. A very small example of IPC mocking:Code under test: https://github.com/awslabs/aws-greengrass-labs-component-for-home-assistant/blob/main/artifacts/secret.pyTests: https://github.com/awslabs/aws-greengrass-labs-component-for-home-assistant/blob/main/tests/test_artifacts_secret.pyIf you are prepared to deploy the component locally (on your developer machine) to a Greengrass instance, you could also achieve integration testing of the component (assuming the component doesn't have dependencies on hardware that is not available on your developer machine). You could:Use the Greengrass CLI to deploy your component locally (quickly, easily and repeatedly while developing).Configure your local Greengrass to interact with local client devices by deploying the MQTT broker (Moquette), MQTT bridge, Client device auth and IP detector components.Configure the MQTT bridge topic mapping to relay the appropriate topics from Moquette (LocalMqtt) to PubSub and PubSub to Moquette (LocalMqtt). This blog is also helpful.Use an MQTT client of your choice (such as mosquitto_pub and mosquitto_sub) to connect to Moquette as a client device, and do pubsub with your component via the MQTT bridge.You could automate integration testing by getting this into a test framework like Cucumber. And go further by getting this all into a CI/CD pipeline.CommentShareEXPERTGreg_Banswered a year ago"
"I have sagemaker xgboost project template "build, train, deploy" working, but I'd like to modify if to use tensorflow instead of xgboost. First up I was just trying to change the abalone folder to topic to reflect the data we are working with.I was experimenting with trying to change the topic/pipeline.py file like so image_uri = sagemaker.image_uris.retrieve( framework="tensorflow", region=region, version="1.0-1", py_version="py3", instance_type=training_instance_type, )i.e. just changing the framework name from "xgboost" to "tensorflow", but then when I run the following from a notebook:from pipelines.topic.pipeline import get_pipelinepipeline = get_pipeline( region=region, role=role, default_bucket=default_bucket, model_package_group_name=model_package_group_name, pipeline_name=pipeline_name,)I get the following errorValueError Traceback (most recent call last)<ipython-input-5-6343f00c3471> in <module> 7 default_bucket=default_bucket, 8 model_package_group_name=model_package_group_name,----> 9 pipeline_name=pipeline_name, 10 )~/topic-models-no-monitoring-p-rboparx6tdeg/sagemaker-topic-models-no-monitoring-p-rboparx6tdeg-modelbuild/pipelines/topic/pipeline.py in get_pipeline(region, sagemaker_project_arn, role, default_bucket, model_package_group_name, pipeline_name, base_job_prefix, processing_instance_type, training_instance_type) 188 version="1.0-1", 189 py_version="py3",--> 190 instance_type=training_instance_type, 191 ) 192 tf_train = Estimator(/opt/conda/lib/python3.7/site-packages/sagemaker/workflow/utilities.py in wrapper(*args, **kwargs) 197 logger.warning(warning_msg_template, arg_name, func_name, type(value)) 198 kwargs[arg_name] = value.default_value--> 199 return func(*args, **kwargs) 200 201 return wrapper/opt/conda/lib/python3.7/site-packages/sagemaker/image_uris.py in retrieve(framework, region, version, py_version, instance_type, accelerator_type, image_scope, container_version, distribution, base_framework_version, training_compiler_config, model_id, model_version, tolerate_vulnerable_model, tolerate_deprecated_model, sdk_version, inference_tool, serverless_inference_config) 152 if inference_tool == "neuron": 153 _framework = f"{framework}-{inference_tool}"--> 154 config = _config_for_framework_and_scope(_framework, image_scope, accelerator_type) 155 156 original_version = version/opt/conda/lib/python3.7/site-packages/sagemaker/image_uris.py in _config_for_framework_and_scope(framework, image_scope, accelerator_type) 277 image_scope = available_scopes[0] 278 --> 279 _validate_arg(image_scope, available_scopes, "image scope") 280 return config if "scope" in config else config[image_scope] 281 /opt/conda/lib/python3.7/site-packages/sagemaker/image_uris.py in _validate_arg(arg, available_options, arg_name) 443 "Unsupported {arg_name}: {arg}. You may need to upgrade your SDK version " 444 "(pip install -U sagemaker) for newer {arg_name}s. Supported {arg_name}(s): "--> 445 "{options}.".format(arg_name=arg_name, arg=arg, options=", ".join(available_options)) 446 ) 447 ValueError: Unsupported image scope: None. You may need to upgrade your SDK version (pip install -U sagemaker) for newer image scopes. Supported image scope(s): eia, inference, training.I was skeptical that the upgrade suggested by the error message would fix this, but gave it a try:ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. 
This behaviour is the source of the following dependency conflicts.pipelines 0.0.1 requires sagemaker==2.93.0, but you have sagemaker 2.110.0 which is incompatible.So that seems like I can't upgrade sagemaker without changing pipelines, and it's not clear that's the right thing to do - like this project template may be all designed around those particular ealier libraries.But so is it that the "framework" name should be different, e.g. "tf"? Or is there some other setting that needs changing in order to allow me to get a tensorflow pipeline ...?However I find that if I use the existing abalone/pipeline.py file I can change the framework to "tensorflow" and there's no problem running that particular step in the notebook.I've searched all the files in the project to try and find any dependency on the abalone folder name, and the closest I came was in codebuild-buildspec.yml but that hasn't helped.Has anyone else successfully changed the folder name from abalone to something else, or am I stuck with abalone if I want to make progress?Many thanks in advancep.s. is there a slack community for sagemaker studio anywhere?p.p.s. I have tried changing all instances of the term "Abalone" to "Topic" within the topic/pipeline.py file (matching case as appropriate) to no availp.p.p.s. I discovered that I can get an error free run of getting the pipeline from a unit test:import pytestfrom pipelines.topic.pipeline import *region = 'eu-west-1'role = 'arn:aws:iam::398371982844:role/SageMakerExecutionRole'default_bucket = 'sagemaker-eu-west-1-398371982844'model_package_group_name = 'TopicModelPackageGroup-Example'pipeline_name = 'TopicPipeline-Example'def test_pipeline(): pipeline = get_pipeline( region=region, role=role, default_bucket=default_bucket, model_package_group_name=model_package_group_name, pipeline_name=pipeline_name, )and strangely if I go to a different copy of the notebook, everything runs fine, there ... so I have two seemingly identical ipynb notebooks, and in one of them when I switch to trying to get a topic pipeline I get the above error, and in the other, I get no error at all, very strangep.p.p.p.s. I also notice that conda list returns very different results depending on whether I run it in the notebook or the terminal ... but the conda list results are identical for the two notebooks ...FollowComment"
adjusting sagemaker xgboost project to tensorflow (or even just different folder name)
https://repost.aws/questions/QUAL9Vn9abQ6KKCs2ASwwmzg/adjusting-sagemaker-xgboost-project-to-tensorflow-or-even-just-different-folder-name
true
"1Accepted AnswerHi! I see two parts in your question:How to use Tensorflow in a SageMaker estimator to train and deploy a modelHow to adapt a SageMaker MLOps template to your data and codeTensorflow estimator is slightly different from XGBoost estimator, and the easiest way to work with it is not by using sagemaker.image_uris.retrieve(framework="tensorflow",...), but to use sagemaker.tensorflow.TensorFlow estimator instead.These are the two examples, which will be useful for you:Train an MNIST model with TensorFlowDeploy a Trained TensorFlow V2 ModelAs for updating the MLOps template, I recommend you to go through the comprehensive self-service lab on SageMaker Pipelines.It shows you how to update the source directory from abalone to customer_churn. In your case it will be the topic.P. S. As for a Slack channel, to my best knowledge, this re:Post forum now is the best place to ask any questions on Amazon SageMaker, including SageMaker Studio.CommentShareIvananswered 8 months agoregulatansaku 8 months agothanks Ivan for also taking a look at this question - very helpful indeed. That "self-service lab on Sagemaker Pipelines" is exactly what I was looking for, although it doesn't mention the error message that I encountered above, and now for some reason am encountering again in the one notebook that worked yesterday - intermittent errors are of course the most frustrating :-(I will work through that lab from scratch after lunch, but I feel that I am missing something conceptual about the process by which a sagemaker studio project gets deployed. Like that lab mentions pushing code the repo as a way to kick off a build, while the notebook itself seems to imply that one can kick of a deploy from the notebook itself.Is the problem perhaps that when I'm operating from within the notebook that it's only using the code that happens to be on the latest main branch? current feature branch?and why doesn't python setup.py build get the requirements in the right place for the pipeline? Maybe it will be doing that if I just commit the code to the right branch?I'll work through the lab in a fresh project after lunch and see where I get to ...ShareIvan 8 months agoHi, @regulatansaku.See my comment below.Like that lab mentions pushing code the repo as a way to kick off a build, while the notebook itself seems to imply that one can kick of a deploy from the notebook itself.These are just two ways of doing the same thing. From the notebook you can try the SageMaker pipeline it in "dev" mode. When you commit the code, it will trigger a CI/CD pipeline in AWS CodePipeline, which will run the same pipeline in the automated "ops" mode, without a need to have any notebooks up and running.For the requirements problem, I tried to answer in your other post.Shareregulatansaku 8 months agothanks @ivan - that's makes sense. Just for some reason I get these random intermittent errors in the notebook trying to get the pipeline, the: ValueError: Unsupported image scope: None. You may need to upgrade your SDK version (pip install -U sagemaker) for newer image scopes. Supported image scope(s): eia, inference, training.But so the workaround to avoid that issue appears to be to prefer to push the code to the main branch. It's great to have a way to avoid that craziness so big thanks for thatShareIvan 8 months agoHi, @regulatansaku.I'm glad that you've resolved your issue. As for the error about "Unsupported image scope" - check your code that you accidentally don't have sagemaker.image_uris.retrieve left anywhere. 
If you want to retrieve the TensorFlow image with this API, you need to specify either "training" or "inference" as a scope, because these two are slightly different images. But as I said before, you don't need to do that if you use the TensorFlow estimator, and not a generic Estimator.Share"
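A minimal sketch of swapping the generic Estimator for the TensorFlow estimator inside the pipeline's get_pipeline(), along the lines suggested above; the entry_point, source_dir, framework/Python versions, and the assumption that the template's processing step is still called step_process are all placeholders to adjust for your project.
```
from sagemaker.inputs import TrainingInput
from sagemaker.tensorflow import TensorFlow
from sagemaker.workflow.steps import TrainingStep

# Replaces image_uris.retrieve(...) + Estimator(...) in pipeline.py
tf_train = TensorFlow(
    entry_point="train.py",        # assumed training script
    source_dir="pipelines/topic",  # assumed location of the script
    role=role,
    instance_count=1,
    instance_type=training_instance_type,
    framework_version="2.8",       # assumed TF version
    py_version="py39",
    base_job_name=f"{base_job_prefix}/topic-train",
)

step_train = TrainingStep(
    name="TrainTopicModel",
    estimator=tf_train,
    inputs={
        "train": TrainingInput(
            s3_data=step_process.properties.ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri
        ),
    },
)
```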
"I am trying to create an App Runner service using container image from ECR from AWS Console. When I submit my service creation request, I receive the following error, saying it's exceeding my service quota which is zeroI captured the network traffic via Chrome inspector and below are the request and responserequest:{"HealthCheckConfiguration":{"HealthyThreshold":1,"Interval":10,"Protocol":"TCP","Timeout":5,"UnhealthyThreshold":5},"InstanceConfiguration":{"Cpu":"1024","Memory":"2048"},"NetworkConfiguration":{"EgressConfiguration":{"EgressType":"DEFAULT"},"IngressConfiguration":{"IsPubliclyAccessible":true}},"ObservabilityConfiguration":{"ObservabilityEnabled":false},"ServiceName":"my-app-name","SourceConfiguration":{"AutoDeploymentsEnabled":false,"ImageRepository":{"ImageConfiguration":{"Port":"8080"},"ImageIdentifier":"my-account-id.dkr.ecr.us-west-2.amazonaws.com/my-ecr-registry:latest","ImageRepositoryType":"ECR"},"AuthenticationConfiguration":{"AccessRoleArn":"arn:aws:iam::my-account-id:role/service-role/MyIAMRoleName"}}}response:{"__type":"com.amazonaws.apprunner#ServiceQuotaExceededException","Message":"Exceeded service quota for customer my-account-id : 0."}I checked the service quota limit for all services involved in this request and none of them is zero.FollowComment"
Failed to create App Runner Service due to exceeding service quota of zero
https://repost.aws/questions/QURLKo4Y3PRCm1AeTfjUxe5w/failed-to-create-app-runner-service-due-to-exceeding-service-quota-of-zero
false
"1Hi,Thanks for reporting the issue. We are aware of this and the team is actively working to resolve the issue. By the end of day today we will be deploying the fix to all prod regions.CommentShareHarianswered 6 months agorePost-User-3387005 6 months agoThanks Hari. Please give me an update when the fix is rolled out in west-us-2.Share1Hi, fix have been deployed to all regions now, could you verify ? ThanksCommentSharecsunaanswered 6 months agorePost-User-3387005 6 months agoit's working in west-us-2. thanks for the quick fix!Share0Hello,I have the same issue. On a fresh account I can create a service from the console UI but as soon as I deployed one with Terraform bot ui and Terraform are giving me the same error.Do you have any ETA on the fix ?ThanksCommentSharerePost-User-3009542answered 6 months agocsuna 6 months agoHi Fix have been deployed to all prod region now, could you verify ? ThanksShare0Hi @Raj,Can you please drop in the screen shot of your error message, we will be happy to look into this. ThanksCommentShareHarianswered 5 months agoRaj 5 months agoI have attached the screen shotShare0Hi,The fix is deployed to all regions, so we shouldnt see this issue for any customers. thanks for the patienceCommentShareHarianswered 6 months ago0I am facing this issue today since evening in us-east 1. Is this an ongoing issue again?CommentShareRajanswered 5 months ago0Hi @Hari. This is image of the error that I have faced on us-east-1, while creating a AppRunner service from AWS ConsoleCommentShareRajanswered 5 months ago"
"Hello,I'm back on trying to get an Android Clang Release build up and running, and I'm running into a linker error:c:\Amazon\Lumberyard\1.11.1.0\dev\Code\SDKs\AWSNativeSDK\lib\android\ndk_r12\android-21\armeabi-v7a\clang-3.8\Release/libaws-cpp-sdk-cognito-identity.a(ub_COGNITO-IDENTITY.cpp.o):/var/lib/jenkins/jobs/AndroidArm32Sta/workspace/aws-sdk-cpp/_build_android_arm_32_static_release/aws-cpp-sdk-cognito-identity/ub_COGNITO-IDENTITY.cpp:function Aws::CognitoIdentity::Model::RoleMappingTypeMapper::GetRoleMappingTypeForName(std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, Aws::Allocator<char> > const&): error: undefined reference to 'Aws::Utils::EnumParseOverflowContainer::StoreOverflow(int, std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, Aws::Allocator<char> > const&)'c:\Amazon\Lumberyard\1.11.1.0\dev\Code\SDKs\AWSNativeSDK\lib\android\ndk_r12\android-21\armeabi-v7a\clang-3.8\Release/libaws-cpp-sdk-cognito-identity.a(ub_COGNITO-IDENTITY.cpp.o):/var/lib/jenkins/jobs/AndroidArm32Sta/workspace/aws-sdk-cpp/_build_android_arm_32_static_release/aws-cpp-sdk-cognito-identity/ub_COGNITO-IDENTITY.cpp:function Aws::CognitoIdentity::Model::RoleMappingTypeMapper::GetNameForRoleMappingType(Aws::CognitoIdentity::Model::RoleMappingType): error: undefined reference to 'Aws::Utils::EnumParseOverflowContainer::RetrieveOverflow(int) const'c:\Amazon\Lumberyard\1.11.1.0\dev\Code\SDKs\AWSNativeSDK\lib\android\ndk_r12\android-21\armeabi-v7a\clang-3.8\Release/libaws-cpp-sdk-cognito-identity.a(ub_COGNITO-IDENTITY.cpp.o):/var/lib/jenkins/jobs/AndroidArm32Sta/workspace/aws-sdk-cpp/_build_android_arm_32_static_release/aws-cpp-sdk-cognito-identity/ub_COGNITO-IDENTITY.cpp:function Aws::CognitoIdentity::Model::AmbiguousRoleResolutionTypeMapper::GetAmbiguousRoleResolutionTypeForName(std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, Aws::Allocator<char> > const&): error: undefined reference to 'Aws::Utils::EnumParseOverflowContainer::StoreOverflow(int, std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, Aws::Allocator<char> > const&)'c:\Amazon\Lumberyard\1.11.1.0\dev\Code\SDKs\AWSNativeSDK\lib\android\ndk_r12\android-21\armeabi-v7a\clang-3.8\Release/libaws-cpp-sdk-cognito-identity.a(ub_COGNITO-IDENTITY.cpp.o):/var/lib/jenkins/jobs/AndroidArm32Sta/workspace/aws-sdk-cpp/_build_android_arm_32_static_release/aws-cpp-sdk-cognito-identity/ub_COGNITO-IDENTITY.cpp:function Aws::CognitoIdentity::Model::AmbiguousRoleResolutionTypeMapper::GetNameForAmbiguousRoleResolutionType(Aws::CognitoIdentity::Model::AmbiguousRoleResolutionType): error: undefined reference to 'Aws::Utils::EnumParseOverflowContainer::RetrieveOverflow(int) const'c:\Amazon\Lumberyard\1.11.1.0\dev\Code\SDKs\AWSNativeSDK\lib\android\ndk_r12\android-21\armeabi-v7a\clang-3.8\Release/libaws-cpp-sdk-cognito-identity.a(ub_COGNITO-IDENTITY.cpp.o):/var/lib/jenkins/jobs/AndroidArm32Sta/workspace/aws-sdk-cpp/_build_android_arm_32_static_release/aws-cpp-sdk-cognito-identity/ub_COGNITO-IDENTITY.cpp:function Aws::CognitoIdentity::Model::MappingRule::Jsonize() const: error: undefined reference to 'Aws::Utils::EnumParseOverflowContainer::RetrieveOverflow(int) const'c:\Amazon\Lumberyard\1.11.1.0\dev\Code\SDKs\AWSNativeSDK\lib\android\ndk_r12\android-21\armeabi-v7a\clang-3.8\Release/libaws-cpp-sdk-cognito-identity.a(ub_COGNITO-IDENTITY.cpp.o):/var/lib/jenkins/jobs/AndroidArm32Sta/workspace/aws-sdk-cpp/_build_android_arm_32_static_release/aws-cpp-sdk-cognito-identity/ub_COGNITO-IDENTITY.cpp:function 
Aws::CognitoIdentity::Model::RoleMapping::operator=(Aws::Utils::Json::JsonValue const&): error: undefined reference to 'Aws::Utils::EnumParseOverflowContainer::StoreOverflow(int, std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, Aws::Allocator<char> > const&)'c:\Amazon\Lumberyard\1.11.1.0\dev\Code\SDKs\AWSNativeSDK\lib\android\ndk_r12\android-21\armeabi-v7a\clang-3.8\Release/libaws-cpp-sdk-cognito-identity.a(ub_COGNITO-IDENTITY.cpp.o):/var/lib/jenkins/jobs/AndroidArm32Sta/workspace/aws-sdk-cpp/_build_android_arm_32_static_release/aws-cpp-sdk-cognito-identity/ub_COGNITO-IDENTITY.cpp:function Aws::CognitoIdentity::Model::RoleMapping::operator=(Aws::Utils::Json::JsonValue const&): error: undefined reference to 'Aws::Utils::EnumParseOverflowContainer::StoreOverflow(int, std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, Aws::Allocator<char> > const&)'c:\Amazon\Lumberyard\1.11.1.0\dev\Code\SDKs\AWSNativeSDK\lib\android\ndk_r12\android-21\armeabi-v7a\clang-3.8\Release/libaws-cpp-sdk-cognito-identity.a(ub_COGNITO-IDENTITY.cpp.o):/var/lib/jenkins/jobs/AndroidArm32Sta/workspace/aws-sdk-cpp/_build_android_arm_32_static_release/aws-cpp-sdk-cognito-identity/ub_COGNITO-IDENTITY.cpp:function Aws::CognitoIdentity::Model::RoleMapping::Jsonize() const: error: undefined reference to 'Aws::Utils::EnumParseOverflowContainer::RetrieveOverflow(int) const'clang++.exe: error: linker command failed with exit code 1 (use -v to see invocation)tl;dr:"undefined reference to 'Aws::Utils::EnumParseOverflowContainer::RetrieveOverflow(...)'""undefined reference to 'Aws::Utils::EnumParseOverflowContainer::StoreOverflow(...)'"Android Clang Debug works fine, runs on device, etc. Any help here would be MOST appreciated.Cheers!Follow"
Android Release - Link Error - Aws::Utils::EnumParseOverflowContainer
https://repost.aws/questions/QUFC4SF0WHR--XpJjs-yiFiA/android-release-link-error-aws-utils-enumparseoverflowcontainer
true
"0Accepted AnswerHi @REDACTEDUSERI was able to find the problem and it was indeed a link order issue with the AWS core library. Adding the following to Tools/build/waf-1.7.13/lmbrwaflib/lumberyard_sdks.py should patch the problem. @REDACTEDUSER@before_method('propagate_uselib_vars')def link_aws_sdk_core_after_android(self):platform = self.env['PLATFORM']if not ('android' in platform and self.bld.spec_monolithic_build()):returnif 'AWS_CPP_SDK_CORE' in self.uselib:self.uselib = [ uselib for uselib in self.uselib if uselib != 'AWS_CPP_SDK_CORE' ]self.uselib.append('AWS_CPP_SDK_CORE')Let me know if you are still seeing an issue after applying the patch.SharerePost-User-6730542answered 6 years ago0Oh man this looks fun -- fetching some answers for ya. Sorry for the delayed response!SharerePost-User-5738838answered 6 years ago0FWIW, I checked the shared and static libs - the symbols are there:(static)$ nm -g -C ./Debug/libaws-cpp-sdk-core.a | grep EnumParseOverflowContainer00000000 T Aws::CheckAndSwapEnumOverflowContainer(Aws::Utils::EnumParseOverflowContainer*, Aws::Utils::EnumParseOverflowContainer*)00000000 W Aws::Utils::EnumParseOverflowContainer* Aws::New<Aws::Utils::EnumParseOverflowContainer>(char const*)00000000 W Aws::Utils::EnumParseOverflowContainer::EnumParseOverflowContainer()00000000 W Aws::Utils::EnumParseOverflowContainer::~EnumParseOverflowContainer()00000000 W void Aws::Delete<Aws::Utils::EnumParseOverflowContainer>(Aws::Utils::EnumParseOverflowContainer*)EnumParseOverflowContainer.cpp.o:00000000 T Aws::Utils::EnumParseOverflowContainer::StoreOverflow(int, std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, Aws::Allocator<char> > const&)00000000 T Aws::Utils::EnumParseOverflowContainer::RetrieveOverflow(int) const$ nm -g -C ./Release/libaws-cpp-sdk-core.a | grep EnumParseOverflowContainer00000001 T Aws::CheckAndSwapEnumOverflowContainer(Aws::Utils::EnumParseOverflowContainer*, Aws::Utils::EnumParseOverflowContainer*)EnumParseOverflowContainer.cpp.o:00000001 T Aws::Utils::EnumParseOverflowContainer::StoreOverflow(int, std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, Aws::Allocator<char> > const&)00000001 T Aws::Utils::EnumParseOverflowContainer::RetrieveOverflow(int) const(shared)$ objdump -TC ./Debug/libaws-cpp-sdk-core.so | grep EnumParseOverflowContainer000c27c4 g DF .text 0000063c Base Aws::CheckAndSwapEnumOverflowContainer(Aws::Utils::EnumParseOverflowContainer*, Aws::Utils::EnumParseOverflowContainer*)000e8934 w DF .text 00000064 Base Aws::Utils::EnumParseOverflowContainer* Aws::New<Aws::Utils::EnumParseOverflowContainer>(char const*)0014a860 w DF .text 000000e4 Base Aws::Utils::EnumParseOverflowContainer::EnumParseOverflowContainer()0014aa50 w DF .text 0000005c Base Aws::Utils::EnumParseOverflowContainer::~EnumParseOverflowContainer()000e8998 w DF .text 00000044 Base void Aws::Delete<Aws::Utils::EnumParseOverflowContainer>(Aws::Utils::EnumParseOverflowContainer*)001a8dc8 g DF .text 000003dc Base Aws::Utils::EnumParseOverflowContainer::StoreOverflow(int, std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, Aws::Allocator<char> > const&)001a84c4 g DF .text 00000904 Base Aws::Utils::EnumParseOverflowContainer::RetrieveOverflow(int) const$ objdump -TC ./Release/libaws-cpp-sdk-core.so | grep EnumParseOverflowContainer00087c39 g DF .text 00000030 Base Aws::CheckAndSwapEnumOverflowContainer(Aws::Utils::EnumParseOverflowContainer*, Aws::Utils::EnumParseOverflowContainer*)000cd091 g DF .text 00000220 Base 
Aws::Utils::EnumParseOverflowContainer::StoreOverflow(int, std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, Aws::Allocator<char> > const&)000ccd75 g DF .text 0000031c Base Aws::Utils::EnumParseOverflowContainer::RetrieveOverflow(int) constSharerePost-User-0158040answered 6 years ago0Any ideas, @REDACTEDUSERSharerePost-User-0158040answered 6 years ago0Thanks for sharing this! I've sent this over to my peers to take a look. Will update you on answers ASAP!SharerePost-User-5738838answered 6 years ago0Any updates? I'd love to do some performance testing on mobile devices :(SharerePost-User-0158040answered 6 years ago0Hi @REDACTEDUSERIt looks as if the AWS core library isn't getting included in the linker command. Could you verify this by either looking at the task dump when the command fails or running your build command with "--zones=runner" included (note: you may need to be quick to capture the temp response file before it's deleted with this option)?SharerePost-User-6730542answered 6 years ago06015-build-out.txt|attachment (403 KB)build-out.txtHi @REDACTEDUSERHere's the tail end of the task dump (ommitted all of the .o files, etc preceding the library options)-lAzGameFramework -lEMotionFXStaticLib -lGem.CloudGemFramework.StaticLibrary.6fc787a982184217a5a553ca24676cfa.v1.1.1 -lAzFramework -lGridMate -lGridMateForTools -lCryAction_AutoFlowNode -lAzCore -lAkMemoryMgr -lAkMusicEngine -lAkSoundEngine -lAkStreamMgr -lAkCompressorFX -lAkConvolutionReverbFX -lAkDelayFX -lAkExpanderFX -lAkFlangerFX -lAkGainFX -lAkGuitarDistortionFX -lAkHarmonizerFX -lAkMatrixReverbFX -lAkMeterFX -lAkParametricEQFX -lAkPeakLimiterFX -lAkPitchShifterFX -lAkRecorderFX -lAkRoomVerbFX -lAkStereoDelayFX -lAkTimeStretchFX -lAkTremoloFX -lAkAudioInputSource -lAkSilenceSource -lAkSineSource -lAkSynthOne -lAkToneSource -lAkSoundSeedImpactFX -lAkSoundSeedWind -lAkSoundSeedWoosh -lCrankcaseAudioREVModelPlayerFX -lAkVorbisDecoder -lMcDSPFutzBoxFX -lMcDSPLimiterFX -lfreetype2 -llz4 -ltomcrypt -ltommath -lexpat -lzlib -lmd5 -llzma -llzss -laws-cpp-sdk-core -lcurl -lssl -lcrypto -lz -laws-cpp-sdk-cognito-identity -laws-cpp-sdk-identity-management -laws-cpp-sdk-lambda -laws-cpp-sdk-gamelift -llua -Wl,-Bdynamic -lc:\\Amazon\\Lumberyard\\1.11.1.0\\dev\\Code -lJ:/work/Android/NDK\\platforms\\android-21\\arch-arm\\usr\\lib -lJ:/work/Android/NDK\\sources\\cxx-stl\\llvm-libc++\\libs\\armeabi-v7a -lc:\\Amazon\\Lumberyard\\1.11.1.0\\dev\\Code\\SDKs -lc:\\Amazon\\Lumberyard\\1.11.1.0\\dev\\Code\\Tools\\CryCommonTools -lc:\\Amazon\\Lumberyard\\1.11.1.0\\dev\\Code\\Tools\\HLSLCrossCompiler\\lib\\android-armeabi-v7a -lc:\\Amazon\\Lumberyard\\1.11.1.0\\dev\\Code\\SDKs\\AWSNativeSDK\\lib\\android\\ndk_r12\\android-21\\armeabi-v7a\\clang-3.8\\Release -landroid -lc -llog -ldl -lc++_shared -lOpenSLES -lAzCore -lHLSLcc -lGLESv2 -lEGL -lm -lGLESv1_CM -laws-cpp-sdk-gamelift'I see -lAzCore in there, but I don't see the AWS core lib specified anywhere. 
Attached is the full output if you want to take a look -SharerePost-User-0158040answered 6 years ago0hi @REDACTEDUSERSharerePost-User-0158040answered 6 years ago0Here you go - list of all the gems I have enabled: {"GemListFormatVersion": 2,"Gems": [{"Path": "Gems/EMotionFX","Uuid": "044a63ea67d04479aa5daf62ded9d9ca","Version": "0.1.0","_comment": "EMotionFX"},{"Path": "MyGame/Gem","Uuid": "0cc9caf722af45e69b2fefa913212db3","Version": "0.1.0"},{"Path": "Gems/LyShine","Uuid": "0fefab3f13364722b2eab3b96ce2bf20","Version": "0.1.0","_comment": "LyShine"},{"Path": "Gems/CloudCanvasCommon","Uuid": "102e23cf4c4c4b748585edbce2bbdc65","Version": "0.1.0","_comment": "CloudCanvasCommon"},{"Path": "Gems/LegacyGameInterface","Uuid": "3108b261962e44b6a0c1c036c693bca2","Version": "1.0.0","_comment": "LegacyGameInterface"},{"Path": "Gems/CryLegacy","Uuid": "352fef7706634c92814c587e84d7165a","Version": "0.1.0","_comment": "CryLegacy"},{"Path": "Gems/Maestro","Uuid": "3b9a978ed6f742a1acb99f74379a342c","Version": "0.1.0","_comment": "Maestro"},{"Path": "Gems/CertificateManager","Uuid": "659cffff33b14a10835bafc6ea623f98","Version": "0.0.1","_comment": "CertificateManager"},{"Path": "Gems/CloudGemFramework/v1","Uuid": "6fc787a982184217a5a553ca24676cfa","Version": "1.1.1","_comment": "CloudGemFramework"},{"Path": "Gems/GameLift","Uuid": "76de765796504906b73be7365a9bff06","Version": "2.0.0","_comment": "GameLift"},{"Path": "Gems/PhysicsEntities","Uuid": "99ea531451fc4f64a5a9fe8f385e8a76","Version": "0.1.0","_comment": "PhysicsEntities"},{"Path": "Gems/Camera","Uuid": "f910686b6725452fbfc4671f95f733c6","Version": "0.1.0","_comment": "Camera"},{"Path": "Gems/LmbrCentral","Uuid": "ff06785f7145416b9d46fde39098cb0c","Version": "0.1.0","_comment": "LmbrCentral"}]}Hope that helps!SharerePost-User-0158040answered 6 years ago0Hi @REDACTEDUSERSharerePost-User-0158040answered 6 years ago0@REDACTEDUSERSharerePost-User-4556816answered 4 years ago0I'm pushing to get this into 1.18... sorry for the wait!SharerePost-User-4556816answered 4 years ago0Hi @Twolewis, I just wanted to let you know that I was able to reproduce the link error with the list of gems you provided. I'll report back as soon as I figure out what's going on!SharerePost-User-6730542answered 6 years ago0Hi @Twolewis, my apologies for the delay. I do see the AWS core library hidden in the linker command. Given that, it now may be a link order issue though I'm not quite sure why it would manifest itself now. Do you have any other AWS gems enabled besides CloudGemFramework mentioned the linker command? I would like to see what the bare minimum is to reproduce this issue locally so I can get to the bottom of it.SharerePost-User-6730542answered 6 years ago"
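For readability, here is the patch from the accepted answer above reflowed into conventional Python indentation; the first decorator was redacted in the original post, the import may already be present in lumberyard_sdks.py, and the indentation of the final append is inferred from context.
```
from waflib.TaskGen import before_method  # likely already imported in lumberyard_sdks.py

# (a second decorator was redacted in the original post)
@before_method('propagate_uselib_vars')
def link_aws_sdk_core_after_android(self):
    platform = self.env['PLATFORM']
    if not ('android' in platform and self.bld.spec_monolithic_build()):
        return
    if 'AWS_CPP_SDK_CORE' in self.uselib:
        # Move the AWS core library to the end of the uselib list so it links last
        self.uselib = [uselib for uselib in self.uselib if uselib != 'AWS_CPP_SDK_CORE']
        self.uselib.append('AWS_CPP_SDK_CORE')
```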
"Hello,Is there any way to configure an AWS-ELB to provide source IP address when it is not configured as an HTTP load balancer but TCP load balancer?To explain a bit, we would like to to setup SMTP servers on EC2-Instances behind a the ELB, but we need to be sure the receive the right origin IP address within the SMTP protocol which now implements X-Forward-For as HTTP protocol.Any advice would be appreciated.FollowComment"
Elastic Load Balancer to provide source IP address when not defined as HTTP LB.
https://repost.aws/questions/QUhFZq5u3ZQHqd2W7URTMqdg/elastic-load-balancer-to-provide-source-ip-address-when-not-defined-as-http-lb
false
"2Yes, I think what you want is a Network Load Balancer. See: Client IP preservation.CommentShareEXPERTkentradanswered a year agoAWS-User-3194526 a year agoThanks, will check thatShare"
"Question 1: How soon does it take after deploying my EC2 instance for its public IP address to be reachable ?Question 2: I am struggling now for a few days to see the public ip address is not reachable. Currently have a quad zero route under network ACL and also have security group inbound and outbound rules defined to allow all traffic. There is also a quad zero route pointing to the internet gateway ( under routes ) Is there anything else that is required to get my instance ec2-44-198-166-191.compute-1.amazonaws.com to be reachable from the internet?FollowCommentrePost-User-8575837 a year agoThank you for the reply. If every step above is followed and the public address is still not reachable , what else could cause the issue ?ShareAlex_K a year agoSome more detailed troubleshooting tips can be found here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.htmlIt might also be worth creating a small instance in the same subnet to check that the instance is responding there before working backwards through the internet facing configurations.Share"
How Soon Can External Access be possible to My EC2 Instance Public IP Address
https://repost.aws/questions/QUd_lIWiy5QO6z2KFFseiNIA/how-soon-can-external-access-be-possible-to-my-ec2-instance-public-ip-address
false
"2Q1: It should be pretty much immediateQ2: Here are some things to check:That the security group is correctly assigned to the instance and that the rules match the protocol (eg. ICMP) as well as port and source IP rangeThe inbound and outbound rules for the network ACL are correct and assigned to the right VPCAny firewall rules (eg. Windows firewall) on the instance are set correctlyThat the service you're trying to connect to (eg. a webserver) is up and running correctly on the instanceFinally you may consider using the AWS VPC Reachability Analyzer: https://aws.amazon.com/blogs/aws/new-vpc-insights-analyzes-reachability-and-visibility-in-vpcs/CommentShareAlex_Kanswered a year agorePost-User-8575837 a year agoThank you for your reply. I have isolated the issue down to the image. Some instances built with Windows Server 2019 or 2002 had firewall off while other instances it was on and I had to modify it or turn it odd completely for troubleshooting purposes. So really the last steps Are to check the image is configured for the services and modify the Windows firewall accordingly. Thus , I see that not all images are made equal. Best Regards!Share"
"After resolving my first issue with getting a resource data sync set up, I've now run into another issue with the same folder.When a resource data sync is created, it creates a folder structure with 13 folders following a folder structure like:s3://resource-data-sync-bucket/AWS:*/accountid=*/regions=*/resourcetype=*/instance.json}When running the glue crawler over this, a schema is created where partitions are made for each subpath with an = in it.This works fine for most of the data, except for the path starting with AWS:InstanceInformation. The instance information json files ALSO contain a "resourcetype" field as can be seen here.{"PlatformName":"Microsoft Windows Server 2019 Datacenter","PlatformVersion":"10.0.17763","AgentType":"amazon-ssm-agent","AgentVersion":"3.1.1260.0","InstanceId":"i","InstanceStatus":"Active","ComputerName":"computer.name","IpAddress":"10.0.0.0","ResourceType":"EC2Instance","PlatformType":"Windows","resourceId":"i-0a6dfb4f042d465b2","captureTime":"2022-04-22T19:27:27Z","schemaVersion":"1.0"}As a result, there are now two "resourcetype" columns in the "aws_instanceinformation" table schema. Attempts to query that table result in the error HIVE_INVALID_METADATA: Hive metadata for table is invalid: Table descriptor contains duplicate columnsI've worked around this issue by removing the offending field and setting the crawler to ignore schema updates, but this doesn't seem like a great long term solution since any changes made by AWS to the schema will be ignored.Is this a known issue with using this solution? Are there any plans to change how the AWS:InstanceInformation documents are so duplicate columns aren't created.FollowComment"
AWS:InstanceInformation folder created in s3 by Resource Data Sync cannot be queried by Athena because it has an invalid schema with duplicate columns.
https://repost.aws/questions/QUGe-QMY5KQFmaoW8C4lxrCA/aws-instanceinformation-folder-created-in-s3-by-resource-data-sync-cannot-be-queried-by-athena-because-it-has-an-invalid-schema-with-duplicate-columns
false
"Hello Everybody,I plan to migrate AWS Account to different account by all services would be transferred as belowEC2 20-30 InstantsCloudwatchOpenSearhCloudfrontS3Load BalanceEIP for ec2 are more than 20 IP.Therefore, AMI it's seem to be cannot migrate IP together with EC2https://aws.amazon.com/premiumsupport/knowledge-center/account-transfer-ec2-instance/Then, I have checked transferring EIP and found it need to open a support cases for this.https://repost.aws/questions/QUImID0dYDRR24o4PAEUPvyg/is-it-possible-move-migrate-eip-to-another-accountThis is my questionsTo transfer EIP that Account Basic can do open a ticket for support this?If must be developer, business and enterprise what is minimum type?Just source account or destination must pay as well for support?https://us-east-1.console.aws.amazon.com/support/plans/home?region=us-east-1#/All suggestions and recommendation, Thank you in advance.FollowComment"
Migrate AWS Account
https://repost.aws/questions/QUSKhjpNRLQB6TsJVulD_4iQ/migrate-aws-account
false
"0Hi - Thanks for reaching out.To transfer EIP that Account Basic can do open a ticket for support this?- If you have Basic Support, you can't create a technical support case. Reference : https://docs.aws.amazon.com/awssupport/latest/user/case-management.htmlIf must be developer, business and enterprise what is minimum type?- It starts with Developer , refer : https://aws.amazon.com/premiumsupport/plans/. You can always cancel a paid Support plan, switch to the Basic support plan. Refer : https://aws.amazon.com/premiumsupport/faqs/Just source account or destination must pay as well for support - Looks like from the thread you may have to open from both account , but you can always try with the source to start with. But as mentioned, you can always cancel a paid Support plan, switch to the Basic support plan. Support plans are always account specific.CommentShareEXPERTAWS-User-Nitinanswered 7 months ago"
"I am starting up with the AWS SNS service to send out the SMSSMS Pricing on the page https://aws.amazon.com/sns/sms-pricing/ is mentioned to for India region to be 0.00278 for transactional SMS while when i send out the SMS in the delivery report it comes as **0.0413 ** reports that seems to be 20X high on what is mentioned. I am sending the SMS from the same region **ap-south-1(Mumbai) ** to Indian number.any one know how can this can be fixed ?{ "notification": { "messageId": "badd7f9d-8372-5b80-b481-25ce9cd2af96", "timestamp": "2022-01-08 05:13:04.879" }, "delivery": { "mnc": 78, "numberOfMessageParts": 1, "destination": "+91XXXXXXXXXX", "priceInUSD": 0.0413, "smsType": "Transactional", "mcc": 404, "providerResponse": "Message has been accepted by phone", "dwellTimeMs": 426, "dwellTimeMsUntilDeviceAck": 3522 }, "status": "SUCCESS"}FollowComment"
SMS charges via SNS 20X high then actual
https://repost.aws/questions/QU68x63rPLQjiEGlQv1h_IFg/sms-charges-via-sns-20x-high-then-actual
true
"1Accepted AnswerHello Rajesh,Hope you are doing well.I see you concerned about the SNS SMS charges being twice the amount mentioned in the pricing page.The pricing page states as follows [1]:Network/ HNITransactional SMSPromotional SMSAll Networks$0.00278$0.00278All Networks - International$0.0413$0.0413By default, when you send messages to recipients in India, Amazon SNS uses International Long Distance Operator (ILDO) connections to transmit those messages. When recipients see a message that's sent over an ILDO connection, it appears to be sent from a random numeric ID. [2]As a result, you are incurring charges as mentioned for All Networks - InternationalIf you wish to incur charges for the pricing mentioned for All Networks which is $0.00278, you will have to send SMS through local routes.To send messages using local routes, you must first register your use case and message templates with the Telecom Regulatory Authority of India (TRAI) through Distributed Ledger Technology (DLT) portals.Hope this helps.References:1: https://aws.amazon.com/sns/sms-pricing/2: https://docs.aws.amazon.com/sns/latest/dg/channels-sms-senderid-india.htmlRegards,HarshaVardhanGCommentShareHarshaVardhanGanswered a year agoRajesh a year agothanks, it provides the clarity on why the issue happened, i will explore on the step mentioned to register via local routes, let me check and see can build anything on top of this.ShareShehan Jayawardane 7 months agoHi Harsha, above cost $0.00278 is per SMS?Share"
"It seems that there is some inconsistencies in how the Amazon Polly Console (and files processed through the CLI or API) is handling non-coded pauses when using <break> or where the </speak> is placed in the file.Here are some examples to illustrate the problem - it doesn't matter if a neural voice or standard voice is used.Example 1:<speak>Article 1Video provides a powerful way to help you prove your point. When you click Online Video, you can paste in the embed code for the video you want to add. You can also type a keyword to search online for the video that best fits your document.</speak>```A pause is inserted after Article 1 since there is a paragraph break which is the expected result. Example 2:<speak>Article 1Video provides a powerful way to help you prove your point. When you click Online Video <break time = ".3s"/> you can paste in the embed code for the video you want to add. You can also type a keyword to search online for the video that best fits your document.</speakA <break> is inserted in place of the comma. When this occurs, the paragraph break after Article 1 is ignored so there is no pause between Article 1 and Video. This is unexpected.Example 3:<speak>Article 1Video provides a powerful way to help you prove your point. When you click Online Video, you can paste in the embed code for the video you want to add. You can also type a keyword to search online for the video that best fits your document.</speak>```Similar to Example 1, a pause **should ** occur after Article 1 since there is a paragraph break. This does not occur. Notice that there is no line break before </speak>. Any clarification would be most helpful as results are inconsistent now and it would be time consuming to have to insert`<p></p>` throughout to get expected results. Edited by: vabtm on Jul 3, 2020 1:57 PMFollowComment"
<break> Inconsistencies?
https://repost.aws/questions/QU_XBx8dkkS2CnaPUa7RC6XA/break-inconsistencies
false
"0Hi vabtm,Thank you for reaching out to us. A single newline doesn't enforce a paragraph break, it may happen that the text is split to multiple chunks which could result in having a pause after "Article 1", our recommendation is to use either p tag or more than one single newline to introduce the pause. Examples:<speak><p>Article 1</p><p>Video provides a powerful way to help you prove your point. When you click Online Video <break time = ".3s"/> you can paste in the embed code for the video you want to add. You can also type a keyword to search online for the video that best fits your document.</p></speak>or<speak>Article 1Video provides a powerful way to help you prove your point. When you click Online Video <break time = ".3s"/> you can paste in the embed code for the video you want to add. You can also type a keyword to search online for the video that best fits your document.</speak>Thanks,TarekCommentShareAWS-User-4110704answered 3 years ago0Thanks for responding, Tarek. I did some additional testing to confirm by using this example:<speak>Apple Oranges Grapes</speak>```That produces what you would expect - "Apple Oranges Grapes"<speak>AppleOrangesGrapes</speakThis also produces "Apple Oranges Grapes" as in the example above.To get the break, you need to have at least two newlines to create a pause as in this example.<speak>Apple Oranges Grapes</speak>```CommentSharevabtmanswered 3 years ago0Hi Vabtm,Yes, your examples shows how it works indeed. Please reach out to us, if you have any other related issues.Best,FatihCommentShareAWS-User-7152847answered 3 years ago"
"I am unable to delete a couple of buckets that i have. I get a generic error that it failed when i try both "Delete" and "Empty". I would open a ticket, however, i do not have a support plan. Any ideas? The buckets are large a contain 10s of thousands of items.Edited by: guero2356 on Nov 4, 2019 1:01 PMFollowComment"
Error Deleting Buckets
https://repost.aws/questions/QU4TdxEmQkSKKq8fnkKuXS1w/error-deleting-buckets
false
0Luckily it is working todayCommentShareguero2356answered 4 years ago
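The console can time out when emptying buckets with tens of thousands of objects; a minimal boto3 sketch that deletes objects in batches before deleting the bucket (the bucket name is a placeholder, and a versioned bucket would also need its object versions and delete markers removed).
```
import boto3

s3 = boto3.client("s3")
bucket = "my-large-bucket"  # placeholder

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    contents = page.get("Contents", [])
    if not contents:
        continue
    # delete_objects accepts at most 1,000 keys per call; each listed page stays within that limit
    s3.delete_objects(
        Bucket=bucket,
        Delete={"Objects": [{"Key": obj["Key"]} for obj in contents]},
    )

s3.delete_bucket(Bucket=bucket)
```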
My customer is looking for guidance on using S3 Select on pre-signed URLs. Is this possible? Is there any documentation around this?FollowComment
S3 Select from pre-signed URLs?
https://repost.aws/questions/QUYDu9WjnnR3ipeACyZBbcwQ/s3-select-from-pre-signed-urls
true
"0Accepted Answerthis is not possible, as the method takes object name and bucket name instead of url as documented here:https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectSELECTContent.htmlCommentShareLuis_Canswered 4 years ago"
"Hi, I am trying to fetch the instance name by using the instance id in Lambda function (using Python language) via the below commands.if sys.argv[0] == 'ON':monitor = ec2.monitor_instances(InstanceIds=[id])else:monitor = ec2.unmonitor_instances(InstanceIds=[id])The lambda function gets executed when an alarm is in active state.I am getting error as,[ERROR] ClientError: An error occurred (UnauthorizedOperation) when calling the MonitorInstances operation: You are not authorized to perform this operation.I have full ec2 access, still not able to execute these commands. Do I need any specific role for this or are these commands outdated?Can you suggest any other command which can be executed in lambda function to get the instance name using the instance id?FollowComment"
ec2.monitor_instances() operation giving unauthorize error.
https://repost.aws/questions/QUz-YjkFInRSSa-u_OFpPCrQ/ec2-monitor-instances-operation-giving-unauthorize-error
true
"0Accepted AnswerThe sample code you're given isn't really applicable in Lambda (because there isn't a sys.argv[0] to reference) - but I'm going to assume that it isn't the actual code you're using.What is in the variable id - have you checked to make sure that it is a valid instance id and it is an instance id that you own?Second, if your account is part of an Organization could the permissions be denied by a Service Control Policy?Otherwise, the IAM troubleshooting guide might be of assistance.CommentShareEXPERTBrettski-AWSanswered a month agorePost-User-0363890 a month agoid variable is the instance id that we are getting from the alarms detail. Yes it is a valid instance id.But the command ec2.monitor_instances(InstanceIds=[id]) should work right?Sure will check for Service Control Policy.ShareBrettski-AWS EXPERTa month agoYes, that command should work - that you're getting UnauthorizedOperation means it's a permissions problem.SharerePost-User-0363890 a month agoOkay Thankyou.Share"
"I'm trying to run an aws lightsail command, from within my Amazon Linux 1 instance, to update the caching settings on my distribution.When I first created the command it worked fine ... but after a while it stopped working and gave an "An HTTP Client raised an unhandled exception: 'module' object has no attribute 'raise_from'" error.I ran the command with the debug parm and got the output at the bottom of this post (partially, the full debug output won't fit in a post).Any suggestions? It seems I'm missing a python module, but don't know python at all (Java & RPG are my thing).Thanks!David2022-12-09 07:44:33,646 - MainThread - botocore.endpoint - DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=POST, url=https://lightsail.us-east-1.amazonaws.com/, headers={'Content-Length': '1118', 'X-Amz-Target': 'Lightsail_20161128.UpdateDistribution', 'X-Amz-Date': '20221209T134433Z', 'User-Agent': 'aws-cli/1.18.107 Python/2.7.18 Linux/4.14.248-129.473.amzn1.x86_64 botocore/1.17.31', 'Content-Type': 'application/x-amz-json-1.1', 'Authorization': 'AWS4-HMAC-SHA256 Credential=qqqqqqq/20221209/us-east-1/lightsail/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-amz-target, Signature=xxxxxxxx'}>2022-12-09 07:44:33,646 - MainThread - botocore.httpsession - DEBUG - Exception received when sending urllib3 HTTP requestTraceback (most recent call last):File "/usr/lib/python2.7/dist-packages/botocore/httpsession.py", line 250, in sendconn = manager.connection_from_url(request.url)File "/usr/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 290, in connection_from_urlu = parse_url(url)File "/usr/lib/python2.7/dist-packages/urllib3/util/url.py", line 392, in parse_urlreturn six.raise_from(LocationParseError(source_url), None)AttributeError: 'module' object has no attribute 'raise_from'2022-12-09 07:44:33,653 - MainThread - botocore.hooks - DEBUG - Event needs-retry.lightsail.UpdateDistribution: calling handler <botocore.retryhandler.RetryHandler object at 0x7fa41e1f5c90>2022-12-09 07:44:33,653 - MainThread - awscli.clidriver - DEBUG - Exception caught in main()Traceback (most recent call last):File "/usr/lib/python2.7/dist-packages/awscli/clidriver.py", line 217, in mainreturn command_table[parsed_args.command](remaining, parsed_args)File "/usr/lib/python2.7/dist-packages/awscli/clidriver.py", line 358, in callreturn command_table[parsed_args.operation](remaining, parsed_globals)File "/usr/lib/python2.7/dist-packages/awscli/clidriver.py", line 530, in callcall_parameters, parsed_globals)File "/usr/lib/python2.7/dist-packages/awscli/clidriver.py", line 650, in invokeclient, operation_name, parameters, parsed_globals)File "/usr/lib/python2.7/dist-packages/awscli/clidriver.py", line 662, in _make_client_call**parameters)File "/usr/lib/python2.7/dist-packages/botocore/client.py", line 316, in _api_callreturn self._make_api_call(operation_name, kwargs)File "/usr/lib/python2.7/dist-packages/botocore/client.py", line 622, in _make_api_calloperation_model, request_dict, request_context)File "/usr/lib/python2.7/dist-packages/botocore/client.py", line 641, in _make_requestreturn self._endpoint.make_request(operation_model, request_dict)File "/usr/lib/python2.7/dist-packages/botocore/endpoint.py", line 102, in make_requestreturn self._send_request(request_dict, operation_model)File "/usr/lib/python2.7/dist-packages/botocore/endpoint.py", line 137, in _send_requestsuccess_response, exception):File "/usr/lib/python2.7/dist-packages/botocore/endpoint.py", line 256, in 
_needs_retrycaught_exception=caught_exception, request_dict=request_dict)File "/usr/lib/python2.7/dist-packages/botocore/hooks.py", line 356, in emitreturn self._emitter.emit(aliased_event_name, **kwargs)File "/usr/lib/python2.7/dist-packages/botocore/hooks.py", line 228, in emitreturn self._emit(event_name, kwargs)File "/usr/lib/python2.7/dist-packages/botocore/hooks.py", line 211, in _emitresponse = handler(**kwargs)File "/usr/lib/python2.7/dist-packages/botocore/retryhandler.py", line 183, in callif self._checker(attempts, response, caught_exception):File "/usr/lib/python2.7/dist-packages/botocore/retryhandler.py", line 251, in callcaught_exception)File "/usr/lib/python2.7/dist-packages/botocore/retryhandler.py", line 269, in _should_retryreturn self._checker(attempt_number, response, caught_exception)File "/usr/lib/python2.7/dist-packages/botocore/retryhandler.py", line 317, in callcaught_exception)File "/usr/lib/python2.7/dist-packages/botocore/retryhandler.py", line 223, in callattempt_number, caught_exception)File "/usr/lib/python2.7/dist-packages/botocore/retryhandler.py", line 359, in _check_caught_exceptionraise caught_exceptionHTTPClientError: An HTTP Client raised an unhandled exception: 'module' object has no attribute 'raise_from'2022-12-09 07:44:33,660 - MainThread - awscli.clidriver - DEBUG - Exiting with rc 255An HTTP Client raised an unhandled exception: 'module' object has no attribute 'raise_from'FollowComment"
HTTP Client raised an unhandled exception: 'module' object has no attribute 'raise_from'
https://repost.aws/questions/QUsGeJzs_BSKqvtfMhAiE2GQ/http-client-raised-an-unhandled-exception-module-object-has-no-attribute-raise-from
true
0Accepted AnswerIt's difficult to tell but my guess here is "old software".You're running Amazon Linux 1 which is in a limited support phase so I'd recommend upgrading to Amazon Linux 2 very soon.It also appears as if you're running an old version of the AWS CLI which is using Python 2.7 which went end of life in 2020.So please update your operating system and your AWS CLI - version 2 of the CLI is now available.CommentShareEXPERTBrettski-AWSanswered 5 months agoDavid G 5 months agoThanks ... I upgraded my python version and the problem went away.Share
I run the command aws batch delete-compute-environment --compute-environment batch-environment and it returns with no other messages, but the compute environment still persists.FollowComment
how to delete compute environment that is invalid ?
https://repost.aws/questions/QUQDKKXZjnQZGwAAe6bQYB8g/how-to-delete-compute-environment-that-is-invalid
true
"0Accepted AnswerPlease check if service role of the compute environment exists. AWS Batch uses service roles to manage EC2 and ECS resources on your behalf. Is the role had been deleted prior to deleting the compute environment, then AWS Batch is not able to delete the compute resources and the enters INVALID state. In case if the role was deleted, then create service role again with the same name and try deleting the compute environment again.CommentSharealex_lykasovanswered a year agoEXPERTJohn_Freviewed a year agoNicholas Yue a year agoYes, the service role still exists.I have gone into the Batch web ui to look again and see this STATUSINVALID - CLIENT_ERROR - User: arn:aws:sts::xxxxxxxxxx:assumed-role/batch_role/aws-batch is not authorized to perform: ecs:ListClusters on resource: * because no identity-based policy allows the ecs:ListClusters action (Service: AmazonECS; Status Code: 400; Error Code: AccessDeniedException; Request ID: zzzzzzzzzzzzzzzzzzzzzzzzz; Proxy: null)Is the deletion problem due to the service role not having the right permissions ? If so, what permission should I set it to ?Sharealex_lykasov a year agoYes, it needs ecs:ListClusters action and some others. Please compare your role's permissions with those described in the documentation here https://docs.aws.amazon.com/batch/latest/userguide/service_IAM_role.htmlShare0If you don't have a Spot Fleet role, complete the following steps to create one for your compute environment:Open the IAM console.In the navigation pane, choose Roles.Choose Create role.Choose AWS service. Then, choose EC2 as the service that will use the role that you're creating.In the Select your use case section, choose EC2 Spot Fleet Role.Important: Don't choose the similarly named EC2 - Spot Fleet.Choose Next: Permissions.Choose Next: Tags. Then, choose Next: Review.For Role name, enter AmazonEC2SpotFleetRole.Choose Create role.Note: Use your new Spot Fleet role to create new compute environments. Existing compute environments can't change Spot Fleet roles. To get rid of the obsolete environment, deactivate and then delete that environment.Open the AWS Batch console.In the navigation pane, choose Compute environments.Choose the compute environment that's in the INVALID state. Then, choose Disable.Choose Delete.Deactivate and delete your compute environmentYou must deactivate and delete your compute environment because the launch template associated with your compute environment doesn't exist. This means that you can't use the compute environment associated with your launch template. You must delete that compute environment, and then create a new compute environment.Open the AWS Batch console.In the navigation pane, choose Compute environments.Select the compute environment that's in the INVALID state. Then, choose Disable.Choose Delete.Create a new compute environment.Deactivate and then activate your compute environmentOpen the AWS Batch console.In the navigation pane, choose Compute environments.Choose the compute environment that's in the INVALID state. Then, choose Disable.Choose the same compute environment from step 3. Then, choose Enable.Related informationCommentShareUsha kumarianswered a year ago0The suggested steps:Open the AWS Batch console.In the navigation pane, choose Compute environments.Select the compute environment that's in the INVALID state. Then, choose Disable.Choose Delete.did not work in my situation. The compute environment is still there.CheersCommentShareNicholas Yueanswered a year ago"
"Am new to IoT Twin Maker. I have an entity that has a component with properties humidity and temperature. Am able to test this component and am getting results successfully.I need to write a Lambda function that will ready this JSON, do a formatting and expose the JSON via REST API. What steps should I follow? Any sample code would help.I tried creating a lambda function asresponse = client.get_property_value(componentName='string',componentTypeId='string',entityId='string',selectedProperties=['string',],workspaceId='string')Am getting the error "An error occurred (ValidationException) when calling the GetPropertyValue operation: No attributePropertyValueReaderByEntity connector defined for a query within entity : <<entityID>>Am sure am missing something, please guide.FollowComment"
AWS IoT Twin Maker - read data from Entities
https://repost.aws/questions/QU-Sq8vmwRRbi9BKcRGY7jFg/aws-iot-twin-maker-read-data-from-entities
false
"0Hi,it seems that the connector you are using does not support get_property_value. You can try if get_property_value_history is working. Not every connector supports necessarily both methods.Cheers,PhilippCommentSharePhilipp Sanswered 7 months ago0You may also need to check your component type model: for GetPropertyValue on properties marked as "isStoredExternally": true you should have an attributePropertyValueReaderByEntity definition under the functions in your component type (e.g. instead of "dataReader" in this example)The lambda function used to implement the above reader should follow this request/response interface: docsCommentSharejohnnyw-awsanswered 3 months ago"
"Hi,Memory usage on DocumentDB is getting lower slowly but surely.What may be causing such behavior?When should we increase the instance type?According to docs the rule of thumb is to increase the memory amount once we hit less than 1/3 of total memory, but the reason for the increase is what we are more interested in.FollowComment"
DocumentDB memory usage
https://repost.aws/questions/QUZMXTnRb8S6uPBdB6e69sLg/documentdb-memory-usage
false
"0There can be several reasons for the memory usage on DocumentDB to be getting lower slowly but surely:Database activity: If there is less activity on the database, such as fewer reads and writes, then there will be less memory usage.Automatic indexing: DocumentDB automatically indexes all data in a collection, so if the data set is small, the memory usage will be lower.Data retention policies: If there are data retention policies in place, such as automatic deletion of old data, then the amount of data stored in the database will decrease, resulting in lower memory usage.Cache eviction: DocumentDB uses a Least Recently Used (LRU) cache eviction policy, which evicts the least recently used items from cache when it reaches its limit. If the database is not being used actively, the cache will evict items and the memory usage will decrease.As for the question of when to increase the instance type, the rule of thumb is to increase the memory amount once we hit less than 1/3 of total memory, but the reason for the increase is more important, you should consider the following:If the workload is increasing and the database is not able to handle the increased load, then the instance type should be increased to handle the additional load.If you are seeing frequent throttling errors, it is likely that the current instance type does not have enough resources to handle the workload, and you should consider increasing the instance type.If you are seeing high CPU usage, it may be a good idea to increase the instance type to handle the increased workload.If you are seeing high memory usage, it may be a good idea to increase the instance type to handle the increased workload.It is important to monitor your database's performance and usage to determine when it is necessary to increase the instance type.CommentSharejayamaheizenberganswered 4 months ago0Hi,Thank you for your answer @jayamaheizenberg.Can you please provide info on what specifically may cause increasing memory usage in DocumentDB?CommentShareAWS-User-9075196answered 4 months ago0Hello. Which metric are you referring to, is it FreeableMemory? There's good insight and recommendations in the documentation which says that when this metric goes below 10% of total instance memory, then scale up the instance.To gather more information you could enable Performance Insights (this is done at instance level). Try correlating the free memory drop with the other metrics, increase in database connections, opcounters (inserts, updates, query, getmore), CPU usage which means the workload has increased. In PI you can check the top queries utilising the most resources that can lead to identifying the root cause.CommentShareMihai Aanswered 4 months ago"
"Greetings,I'm trying to set up launching an instance in EC2 g5 and I keep getting an error saying I don't have the number of permissions for vcpu's. When I contacted support for an increase in authorization, they declined and said it could incur large fees. I want to be able to use the on-demand g5 cloud computing for pixel streaming Unreal Engine 5 to my business clients. How do I utilize the AWS services for this? Maybe I am missing something or using the incorrect instance setup?FollowComment"
Problem launching instance of EC2 server for GPU intensive applications.
https://repost.aws/questions/QUvN3JWV2IRamSM5FQv7yCig/problem-launching-instance-of-ec2-server-for-gpu-intensive-applications
false
"0Hi,Can you share the exact error message you are receiving?Reason I ask this is for the following reasons:You might genuinely not have the permissionsYour account might still be to new to launch large instance types and you might need to opt for a smaller instance.What you can do is check what you can launch by navigating to service quotas, see link to check on how to do this. See my example below to see howmuch G-instances I can launch and available vCPU limit:Navigate to service quotas (Top right, under account ID)Select Amazon Elastic Compute Cloud (Amazon EC2) to view quotas for EC2Select the All G and VT Spot Instance Requests to view howmuch G type instances you can launchSo based on the quota above the account is able to launch G-type instances with a limit of 64 vCPUs this can be:1x g4dn.16xlarge2x g4dn.8xlarge4x g4dn.4xlargeAnd so forth, now important to note that each account will have its own set of assigned quotas and these values will be regional especially when it comes to large instance types. I would suggest you check what your current quota is set too, if your account is still new as mentioned above you will require a special approvals to launch certain instance types.You can review this link: https://aws.amazon.com/marketplace/pp/prodview-iu7szlwdt5clg to get an idea of the overall hourly spend for unreal on G5, from the marketplace link it looks like your cheapest option will beg4dn.2xlarge$0$1.12 [hourly] in us-east-1 Hope this helps, feel free to reach back with the error code and I'll advise accordingly.CommentShareSUPPORT ENGINEERLundianswered 5 months agorePost-User-8632709-Mary 5 months agoThank you so much for your detailed answer! I checked the service quotas and indeed I have 0 for the g and vt instances. How do I go about getting the permissions? I have 1k credit from game jam with unreal and we wanted to use some of it to get an instance for a bit to host an in-development project for our business partner to check out.SharerePost-User-8632709-Mary 5 months agoWhen I asked for an increase they said no via support. Are there other EC2 instances that I could use that would at least work for now?SharerePost-User-8632709-Mary 5 months agoAfter fiddling with it, I launched a large T3 instance and will see if it will work for our pixel streaming. Hopefully, it will work for us! Thanks again for your answerShare"
"Hi Team,Could you please explain the significance of filters in Inspector V2 and also I would need a cloudformation code to enable filters in Inspector V2.Please help me with a cloudformation code example on filters in Inspector V2ThanksFollowComment"
Enabling Filters in Inspector V2
https://repost.aws/questions/QUFSL6YAePTt6ZLfoG-vAY4Q/enabling-filters-in-inspector-v2
false
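For context, an Inspector V2 filter is a saved set of finding criteria that, as far as I understand, either narrows finding views (action NONE) or suppresses matching findings (action SUPPRESS); CloudFormation models it as the AWS::InspectorV2::Filter resource type. Below is a hedged boto3 sketch of the equivalent CreateFilter call, with a purely illustrative name and criteria.

import boto3

inspector2 = boto3.client("inspector2")

resp = inspector2.create_filter(
    name="suppress-low-severity-findings",        # illustrative name
    description="Suppress LOW severity findings in this account",
    action="SUPPRESS",                            # or "NONE" for a view-only filter
    filterCriteria={
        "severity": [{"comparison": "EQUALS", "value": "LOW"}],
    },
)
print("Created filter:", resp["arn"])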
"Hi,As I want to transfer the objects from BUcket A in Account A to Bucket B in Account B. I want to get the same Last Modified dates same can I get that?FollowComment"
Migration of objects to different bucket in diff account
https://repost.aws/questions/QUpaw_MnUrQAKrB5f-sFmVOA/migration-of-objects-to-different-bucket-in-diff-account
false
0Hello ThereYou cannot modify metadata like the Last-Modified date. See https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingMetadata.html#SysMetadataYou can copy the Last-Modified date to user-defined metadata on each object if you need to retain a record. Check this previous thread.https://repost.aws/questions/QUIzqoKL9OQZiETz6ZHHSHiQ/is-there-a-way-to-not-change-an-s-3-objects-last-modified-date-when-copying-it-to-a-new-bucketCommentShareEXPERTMatt-Banswered 4 months ago
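A minimal boto3 sketch of that workaround: copy the object and stash the source object's Last-Modified value as user-defined metadata. Bucket and key names are placeholders, and the cross-account bucket policy/IAM permissions are assumed to be in place already.

import boto3

s3 = boto3.client("s3")

SRC_BUCKET, DST_BUCKET, KEY = "bucket-a", "bucket-b", "path/to/object"  # placeholders

head = s3.head_object(Bucket=SRC_BUCKET, Key=KEY)
original_last_modified = head["LastModified"].isoformat()

# The copy gets a new Last-Modified date, but the original value survives as
# user-defined metadata (x-amz-meta-original-last-modified).
s3.copy_object(
    Bucket=DST_BUCKET,
    Key=KEY,
    CopySource={"Bucket": SRC_BUCKET, "Key": KEY},
    MetadataDirective="REPLACE",
    Metadata={"original-last-modified": original_last_modified},
)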
"I'm still quite new to gamelift services and spending considerable time testing and learning at my own pace. I'd like to use my free tier hours for this learning stage but if I leave a fleet running for hours during which I'm not tinkering around, it would basically be eating away at those free hours... is there a way to avoid this without having to wait for a fleet to activate every time I want to do some actual online testing (not using gamelift local)? So far I tried to create a fleet of type spot and manually set the instances to 0, however I could still log into game sessions from the client. I think it had to do with having protection set to "full" but I wanted to get some solid direction on here before I get to trying this again.If anyone can point me in the right direction I would appreciate it, perhaps there is an easier way through CLI commands as well?FollowComment"
Temporarily 'pausing' a fleet
https://repost.aws/questions/QUN8u2VQTgSb23PdGA4VzYxg/temporarily-pausing-a-fleet
false
"0Hi @REDACTEDUSERSetting the instance count to 0 using the console or UpdateFleetCapacity API, is the correct approach for this.You’re correct that this will not scale-down the fleet if it has an ACTIVE GameSession on it and protectionPolicy is set to FullProtection. You can remove protection from a particular GameSession by calling the UpdateGameSession API or remove protection from all new GameSessions created on a fleet by using the UpdateFleetAttribute APIThese APIs can be called via the AWS CLI - https://docs.aws.amazon.com/cli/latest/reference/gamelift/update-game-session.htmlOne more thing to note, since you mentioned free-tier usage is that GameLift free-tier is only eligible for On-Demand instance usage. So make sure you’re not using SPOT if you wish to use those hourshttps://aws.amazon.com/gamelift/faq/#Q._How_do_I_get_started_with_Amazon_GameLift_for_free.3FCommentSharerePost-User-3818694answered a year ago0Hi @REDACTEDUSERThanks for the detailed explanation and all the helpful links! I was under the impression that I wouldn't have any control over instance count for On-Demand instance usage.CommentSharerePost-User-1971946answered a year ago0Hey @REDACTEDUSERYou are able to control instance counts for on-demand usage the same as spot usage (primarily by the defined fleet capacity).The main difference between the two is that SPOT instances are cheaper, but are subject to the possibility of being interrupted (and shut-down) when AWS needs the capacity back. You can read more about the differences here - https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-ec2-instances.html#gamelift-ec2-instances-spotCommentSharerePost-User-3818694answered a year ago0Thanks @REDACTEDUSERCommentSharerePost-User-1971946answered a year ago"
"Some tasks within Amazon Forecast can take a long time, and I would like it to trigger a Lambda function once it finishes a long task (creating import jobs, training models, making forecasts, etc...) to update my database for exampleHow can I do it?Can I use something like Amazon SNS or SQS?Edited by: luizt on Jan 25, 2021 7:39 AMFollowComment"
Make Forecast trigger a Lambda function when done performing task
https://repost.aws/questions/QUO1PErIgkSESZJSXagFnTpA/make-forecast-trigger-a-lambda-function-when-done-performing-task
true
"0Accepted AnswerYou may use a combination of AWS Lambda, AWS Step Functions, and Amazon CloudWatch events to periodically check and take the next actions accordingly. Please refer to the blog https://aws.amazon.com/blogs/machine-learning/automating-your-amazon-forecast-workflow-with-lambda-step-functions-and-cloudwatch-events-rule/#:~:text=Machine%20Learning%20Blog-,Automating%20your%20Amazon%20Forecast%20workflow%20with%20Lambda,Functions%2C%20and%20CloudWatch%20Events%20rule&text=Amazon%20Forecast%20is%20a%20fully,requiring%20any%20prior%20ML%20experience and https://aws.amazon.com/blogs/machine-learning/building-ai-powered-forecasting-automation-with-amazon-forecast-by-applying-mlops/ for more details.CommentSharechakravnanswered 2 years ago0You can also consider the following solution implementation:Improving Forecast Accuracy with Machine Learning<https://aws.amazon.com/solutions/implementations/improving-forecast-accuracy-with-machine-learning/ >This solution will update you via an SNS topic when a forecast is complete.CommentSharePaul_Manswered 2 years ago"
"Before my free tier expires, I would like to cancel all my hanging service. I tried to find them but got no luck. It's "Amazon Elastic Compute Cloud" and "AmazonCloudWatch". Could someone tell me how do I do that? Thanks.FollowComment"
Don't know how to cancel services
https://repost.aws/questions/QUEc0V5rjkS5O-LchjQ5eyTA/don-t-know-how-to-cancel-services
false
"1There is a good guide on identifying services that you want to shut down here: https://aws.amazon.com/premiumsupport/knowledge-center/check-for-active-resources/Once identified, you can use the following guide for details on how to terminate them: https://aws.amazon.com/premiumsupport/knowledge-center/terminate-resources-account-closure/Specifically for Amazon Elastic Compute Cloud (EC2) you can use the AWS Console to view any current instances you have configured by searching for and selecting EC2. As EC2 is regional, you will need to confirm you have no instances in any region - you can use the EC2 Global view to get a quick overview. In the screenshot below you can see that there is one instance configured in Europe (Ireland) eu-west-1.If you click on this number, you will get a link to the instances which will include resource ID links you can click on to allow you to manage the instance and from here you can click the Instance State button and choose Terminate Instance to shut down any unwanted resources so you are not billed.For a full set of steps to follow to ensure all services relating to EC2 are terminate see here: https://aws.amazon.com/premiumsupport/knowledge-center/delete-terminate-ec2#Delete_or_terminate_EC2_resourcesAmazon CloudWatch collects metrics and data from your other AWS services so once all other services are terminated no additional data or logs should be generated. You can then delete any existing Log Groups from the AWS Console by going to CloudWatch > Log Groups > Select log groups > Actions > Delete log group(s).After you have completed all of the above I would advise closing your AWS account entirely which you can do by following this guide (you can always re-open a new account if you need one at a later date): https://aws.amazon.com/premiumsupport/knowledge-center/close-aws-account/NOTE: There is a 90-day period after closing an account during which any non-terminated services may continue to accrue charges so ensure you have carried out the above steps first.CommentShareCarlPanswered 8 months ago"
My customer wants to connect a datacenter in Germany to us-west-2 on AWS. They are considering a Direct Connect partner and the question is how to get traffic into us-west-2.Would Direct Connect Gateway solve this problem? As far as I understand, Direct Connect Gateway will allow on-prem to connect to any region on AWS.Any limitations other than what's called out in the FAQ?FollowComment
AWS Direct Connect traffic from on-prem DC to remote AWS Region
https://repost.aws/questions/QU0MVC2weCQKOjN-uUR5gLaQ/aws-direct-connect-traffic-from-on-prem-dc-to-remote-aws-region
true
"1Accepted AnswerCorrect - a Direct Connect Gateway will allow your customer to connect to VPCs in any AWS Region, via a Direct Connect location in Germany.You can attach up to 10 VGWs per Direct Connect Gateway; more limits information is in our docs here.CommentShareEXPERTmhjworkanswered 4 years ago"
"So, we've inherited an infra not using the best practices from AWS, and there is a single t2.xlarge machine instance serving Images already stored in it and some backend logic.While we are creating a new Infra using S3 + Cloudfront and ASG, we have to save the day until that happens.We're trying to understand how to remediate the issue(s), which are two fold:"Waiting for the response of the server for an image request takes 24.45 seconds"🤯, and 2) "127kb takes 13.48 seconds to be downloaded" as the following image proves:For the first problem:Looking at CPU Monitor we discarded that to be the problem. It just reaches 7% of CPU at that time.So our guess is that we are being limited by request per second or pps?Inbound packets at that moment: 17K (5min resolution), outbound packets at that moment: 26K (5min resolution).Is there any way to understand if we are being limited somehow due requests or pps? Otherwise CPU would be burning..but seems it is not the case? We couldn't find a standard limit. This is just a normal EC2 instance.For the second problem:Looking at the Network: Input at that moment: 1.29M (5min resolution) Output at that moment: 94.4M (5 min resolution).Seems t2.xlarge has a base bandwidth of 0.75Gbps aka 93Mp/s. Pretty close to the the limit but still there? After all those 94.4 are in 5 mins not per second...Also, by the documentation, t2.xlarge seems to be a Burstable Network instance up to 1.024Gbps and uses credits for that. Where can I found a Monitor of those Network Burstable Credits? Just to check if they were depleted? We can find the CPU Credits graph but not the Network ones?Thanks a lot!FollowComment"
EC2 serving images slow: PPS issue? Bandwith issue? Both of them?
https://repost.aws/questions/QUx9jfvBitRDWZQD8uFU1JAA/ec2-serving-images-slow-pps-issue-bandwith-issue-both-of-them
false
0Have you tried turning on Unlimited for the instance to eliminate any bursting limits?CommentShareEXPERTkentradanswered 2 months ago
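A short boto3 sketch of that suggestion plus a credit check: switch the instance to unlimited CPU credits and pull CPUCreditBalance for the slow window. The instance ID is a placeholder; as far as I know there is no CloudWatch metric for the network burst credits asked about, so NetworkIn/NetworkOut and the CPU credit metrics are the practical proxies.

import boto3
from datetime import datetime, timedelta

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder

# Remove CPU-credit throttling (additional charges may apply).
ec2.modify_instance_credit_specification(
    InstanceCreditSpecifications=[{"InstanceId": INSTANCE_ID, "CpuCredits": "unlimited"}]
)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    StartTime=datetime.utcnow() - timedelta(hours=6),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
print(sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]))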
"Endpoints for Amazon DynamoDB is documented herehttps://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-ddb.htmlI don't see any mention of setting up cross account access for Endpoints for Amazon DynamoDB, is it supported? How can our customer achieve it?FollowComment"
Amazon DynamoDB across account access with VPC endpoints
https://repost.aws/questions/QURYC5wUuVTIGLsJ-Z2-Ks7g/amazon-dynamodb-across-account-access-with-vpc-endpoints
true
"0Accepted AnswerWhat you are trying to do is access a DynamoDB table in a different account.DynamoDB does not support Resource Based Policies (c.f. S3, KMS, SQS to name a few) the way you access DynamoDB is always with a principal of the account that provisioned the DynamoDB Table resource. So, by assuming a role in the account with the table you can access it.Here is the process for cross-account role assume:1: Create a role with access to the DynamoDB table in the DynamoDB account. I'll throw a rough example of what the IAM setup would look like below, note the variables you need to fill in in the <> blocks:DynamoDB Role Trust Policy:{ "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "sts:AssumeRole", "Principal": { "AWS": "arn:aws:iam::<AppAccountID>:root" } } } DynamoDB Role IAM Policy:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "dynamodb:List*", "dynamodb:DescribeReservedCapacity*", "dynamodb:DescribeLimits", "dynamodb:DescribeTimeToLive" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "dynamodb:BatchGet*", "dynamodb:DescribeStream", "dynamodb:DescribeTable", "dynamodb:Get*", "dynamodb:Query", "dynamodb:Scan", "dynamodb:BatchWrite*", "dynamodb:CreateTable", "dynamodb:Delete*", "dynamodb:Update*", "dynamodb:PutItem" ], "Resource": "arn:aws:dynamodb:*:*:table/<TableName>" } ] }2: Create a role in the other account that is allowed to assume the DynamoDB role:IAM Policy for App Role { "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::<DynamoDBAccountID>:role/<DynamoDBRoleName>" } }3: Assume the role in your app. Here is an example in Python:AssumeRole.py`import boto3def assumerole(account, rolename):sts_client = boto3.client('sts')# Call the assume_role method of the STSConnection object and pass the role# ARN and a role session name.assumedRoleObject = sts_client.assume_role( RoleArn="arn:aws:iam::" + account + ":role/" + rolename, RoleSessionName=account + "-" + rolename.replace('/',''))return assumedRoleObject`4: Run your DynamoDB commands with the assumed role's credentialsAs for the networking side, just make sure your VPC in the application account has a DynamoDB endpoint and you should be good to go.CommentShareAWS-User-7543292answered 4 years agorePost-User-4192123 5 months agoBut this doesn't answer the question for cross-account VPC endpoint.e.g. I have a dynamo DB in account A and the AWS lambda function in account B.Created a VPC endpoint for dynamo DB in account B.I have created a cross-account role in Account A for Account B, to access dynamo DB (Created in Account A) in Account B via the AWS Lambda function.The cross-account role contains the following policy with VPC endpoint condtion (created in Account B) conditions.{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "dynamodb:Scan", "Resource": "*", "Condition": { "StringEquals": { "aws:SourceVpce": "vpce-xxxxxxxxxxxx" } } } ]}The question is can we set up a cross-account VPC endpoint?Share"
"I find myself continually migrating further into the AWS eco-system. I would love to make Cloud9 a larger part of my workflow for ease of integration with both GIT and Lambda functions. As part of my writing Python scripts, I would love to interactively run snippets of scripts and have code sent to an ipython (or equivalent) terminal for interactive executuion. Is this possible out of the box? or by adding some keyboard shortcut?FollowComment"
Easy Interactive Python - Is there something akin to VS Code's Shift+Enter for sending code to ipython terminal in Cloud9?
https://repost.aws/questions/QUT4vu6DxVQnSM5znJasJyXg/easy-interactive-python-is-there-something-akin-to-vs-code-s-shift-enter-for-sending-code-to-ipython-terminal-in-cloud9
false
"hi i'm unable to set virtual MFA, i've tried Authy and Google Authenticator. Both does not seem to work! Scanned th eQR code and keyed in the pin code twice but it does not authenticate. i can assure that the code is correct. Any ideas?FollowComment"
Invalid MFA Code
https://repost.aws/questions/QUoxs46k65RWqKa9c1d8IJBg/invalid-mfa-code
true
"2Accepted AnswerHi, when you're setting up the MFA, it should ask you for 2 different codes, I guess you're copying only the first code twice. Try to re-assigned the MFA, scan the code and wait for the 2 different codes to see if that works.Cheers!-D. CommentSharedominguezdanielanswered a year agorePost-User-6780549 a year agothanks that worked!Share0I can't say for sure what the issue is, but I'd recommend first trying these steps, if you haven't already:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_lost-or-broken.htmlIf you're still stuck, you can reach our MFA team for assistance:go.aws/contact-mfa— Chrissy B.CommentShareEXPERTAWS Support - Chrissyanswered a year ago"
"I configured AWS Backup in CDK to enable continuous backups for s3 buckets with this configuration :backup rule : with enableContinuousBackup: true and deleteAfter 35 daysbackup selection : with resources array having the ARN of the bucket directly set and roles setup following the docs of aws : https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.htmlLater I deleted the stack in CDK and ,as expected, all the resources were deleted except for the vault that was orphaned.The problem happens when trying to delete the recovery points inside the vault, I get back the status as Expired with a message Insufficient permission to delete recovery point.I am logged in as a user with AdministratorAccessI changed the access policy of the vault to allow anyone to delete the vault / recovery pointeven when logged as the root of the account, I still get the same message.For reference, this is aws managed policy attached to my user : AdministratorAccess , it Allows (325 of 325 services) including AWS Backup obviously.Here's the vault access policy that I set :{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": [ "backup:DeleteBackupVault", "backup:DeleteBackupVaultAccessPolicy", "backup:DeleteRecoveryPoint", "backup:StartCopyJob", "backup:StartRestoreJob", "backup:UpdateRecoveryPointLifecycle" ], "Resource": "*" } ]}Any ideas what I'm missing here ?**Update ** :A full week after creating the backup recovery point, and still unable to delete it.I tried deleting it from the AWS CLI but no luck.I tried suspending the versioning for the bucket in question and tried, but no luck too.FollowCommentzazik a year agoHave a similar issue:First I created a plan manually for continuous S3 backup just to test how it works. It was using the Default vault.Then, after the test was successful, I deleted the plan together with its resource assignment, rule, and also an automatically created IAM role (AWSBackupDefaultServiceRole). Simply to remove all test artifacts.Then I created a new backup plan via CDK. It targets another (newly created) vault. And uses another (newly created) IAM role.Unfortunately, it fails to create a recovery point - the status is Expired with message saying, that this S3 bucket is already configured for a continuous backup in another vault.So, I tried to remove the recovery point, created by a manually created backup plan before. And it fails with the same error as you have described. I've tried multiple things to overcome the problem, but none of them helped:adding an explicit policy to the vault to allow recovery points deletionadding AWSBackupFullAccess policy to my IAM user"restoring" the default AWSBackupDefaultServiceRole IAM role, that I removed at step 2; and also extending it with a AWSBackupFullAccess permission.This is now a showstopper for me, because the the manually created backup plan has already been deleted, but the new one has not yet started to work properly.SharerePost-User-8589349 a year agoWas a solution for this ever found? What happens after "delete after" date is reached/passed?Share"
Expired s3 Backup Recovery Point
https://repost.aws/questions/QUYA9x3ec2QE26FjQD2w_QWQ/expired-s3-backup-recovery-point
false
"0Dear User,Thanks for posting a query on re:post.Just wanted to share that the recovery point that is created by AWS Backup cannot be deleted in the console window of the protected resource. You can delete them on the AWS Backup console by selecting them in the vault where they are stored and then choosing Delete.Let us know if this information helped in resolving your problem.Thanks.Regards,nikhilCommentShareAWS-User-8643694answered 5 months ago"
"We have been having an issue where we receive multiple Open Events via SNS. These events happen withing a split second of each other, and all come from cloud watch ip addresses. This doesn't appear to happen for every recipient, but it loooks to be around 50%.Below are events we have received (with sensitive data removed) to give you an idea of what we're seeing.Why are there multiples happening?Why are they in such quick succession?Why are they reporting cloudfront IP addresses?Additional InfoWe are sending from a domain setup in cloudfrontWE only receive a single Sent and Delivered Event.Click events can exhibit this behaviour as wellEvent 1{"eventType":"Open","mail":{"timestamp":"2022-01-31T21:00:01.769Z","source":"xxxxx","sendingAccountId":"xxxxx","messageId":"0108017eb1effb69-5ce6a9ca-e8f4-4d57-9aa8-1d6d55e89433-000000","destination":["xxxxx"],"headersTruncated":false,"headers":[{"name":"From","value":"xxxxx"},{"name":"Date","value":"Tue, 01 Feb 2022 08:00:01 +1100"},{"name":"Subject","value":"xxxxx"},{"name":"Message-Id","value":"<YNOLANVQYFU4.CDP4G8YK0B121@xxxxx>"},{"name":"To","value":"xxxxx"},{"name":"MIME-Version","value":"1.0"},{"name":"Content-Type","value":"multipart/mixed; boundary=\"=-bSeeSpdqGZ87P9P4desJPg==\""}],"commonHeaders":{"from":["xxxxx"],"date":"Tue, 01 Feb 2022 08:00:01 +1100","to":["xxxxxx"],"messageId":"0108017eb1effb69-5ce6a9ca-e8f4-4d57-9aa8-1d6d55e89433-000000","subject":"xxxxxx"},"tags":{"ses:operation":["SendRawEmail"],"ses:configuration-set":["xxxxxx"],"ses:source-ip":["203.55.35.200"],"ses:from-domain":["xxxxxx"],"ses:caller-identity":["xxxxx"]}},"open":{"timestamp":"2022-01-31T21:14:02.524Z","userAgent":"Mozilla/4.0 (compatible; ms-office; MSOffice 16)","ipAddress":"64.252.184.86"}} Event 2{"eventType":"Open","mail":{"timestamp":"2022-01-31T21:00:01.769Z","source":"xxxxx,"sendingAccountId":"xxxxx","messageId":"0108017eb1effb69-5ce6a9ca-e8f4-4d57-9aa8-1d6d55e89433-000000","destination":["xxxxx"],"headersTruncated":false,"headers":[{"name":"From","value":"xxxxx"},{"name":"Date","value":"Tue, 01 Feb 2022 08:00:01 +1100"},{"name":"Subject","value":"xxxxx"},{"name":"Message-Id","value":"<YNOLANVQYFU4.CDP4G8YK0B121@xxxxxx>"},{"name":"To","value":"xxxxxx"},{"name":"MIME-Version","value":"1.0"},{"name":"Content-Type","value":"multipart/mixed; boundary=\"=-bSeeSpdqGZ87P9P4desJPg==\""}],"commonHeaders":{"from":["xxxxxx"],"date":"Tue, 01 Feb 2022 08:00:01 +1100","to":["xxxxxx"],"messageId":"0108017eb1effb69-5ce6a9ca-e8f4-4d57-9aa8-1d6d55e89433-000000","subject":"xxxxx"},"tags":{"ses:operation":["SendRawEmail"],"ses:configuration-set":["xxxxx],"ses:source-ip":["203.55.35.200"],"ses:from-domain":["xxxxx"],"ses:caller-identity":["xxxxx"]}},"open":{"timestamp":"2022-01-31T21:14:02.601Z","userAgent":"Mozilla/4.0 (compatible; ms-office; MSOffice 16)","ipAddress":"64.252.184.77"}} ```~~~~Event 3Much the same, but with timestamp: 2022-01-31T21:14:05.990Zipaddress: 64.252.184.74useragent: Mozilla/4.0 (compatible; ms-office; MSOffice rmj)FollowCommentAWS-User-9819885 a year agoThis is still occurring, any ideas out there?I'm getting flooded with click and open events. In some scenarios a single recipient has 97 opened events, and they all come from cloudfront ip addresses. So I am curious to know what's causing this.SharerePost-User-9648090 6 months agodid you figure out why this is happening? have the same issue...Share"
SES Open Events - Getting Duplicate events from cloudfront IPs
https://repost.aws/questions/QUmb8zX9dHRby8eY-bayka0w/ses-open-events-getting-duplicate-events-from-cloudfront-ips
false
"I'm trying to get region-wise buckets using getBucketLocation() method in AWS S3 SDK, for all the regions it returns proper outputs but for the region us-east-1 it throws the error.Tried few solutions such as changing the bucket name, deleting the bucket etc. but still the error persists.FollowComment"
Getting error - The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'ap-south-1' (Service: Amazon S3; Status Code: 400; Error Code: AuthorizationHeaderMalformed)
https://repost.aws/questions/QUHQVY8pX5RgKISg-FjoeetA/getting-error-the-authorization-header-is-malformed-the-region-us-east-1-is-wrong-expecting-ap-south-1-service-amazon-s3-status-code-400-error-code-authorizationheadermalformed
true
"0Accepted AnswergetBucketLocation() returns a region ID like "af-south-1" for all regions except us-east-1, when it returns null. Are you sure it's getBucketLocation() returning the error rather than something using its output that's getting tripped up by the null?CommentShareEXPERTskinsmananswered 4 months ago0I debugged the code and as per you were saying getBucketLocation() returns "US" for us-east-1, so I was getting that error.Problem has been resolved.Thank you.CommentSharekalyani_wanianswered 4 months ago"
"My image files are automatically converting into octet-stream while uploading to S3 via multer-s3 with node js, API gateway and lambda. It was working fine when my node app was on vercel. What possibly could have gone wrong?FollowComment"
Uploading image file to S3
https://repost.aws/questions/QURsUU8baCSzOXznV5exWAEg/uploading-image-file-to-s3
false
"0Check this out. By default the content type is set to application/octet-stream. If you want multer-s3 to automatically find the content-type of the file, use the multerS3.AUTO_CONTENT_TYPE constant.https://www.npmjs.com/package/multer-s3-v2#setting-custom-content-typeCommentSharesupriya_awsanswered 2 months agorePost-User-7772903 2 months agoHi, I am already using multerS3.AUTO_CONTENT_TYPE and it was working fine until now but it’s not working when I moved the node app to Api gateway and lambda.Share"
"I have a large piece of server software (3 GB of files pre-install) that is running on an EC2. The software installs a full app server or interface server that communicates with the front-end desktop GUIs and database. The software was originally designed years ago to be installed through a visual step-by-step installer off a USB drive on premises. This installer ensures that the software is set up with proper configuration, networking, connection to the database, etc. Every client gets 1 or more EC2 instances dedicated to handle their workload.Moving into a cloud-minded paradigm, what is a better way to handle creating many servers, for many clients, all with different configurations of this software? When a server goes down, or another is needed for load, what's a "cloud" practice to spin up a new server and install the same configuration of software on this server?I have multiple ideas including:Store software files in S3 bucket and pull them to the EC2 instances as necessary. A config file for each customer will also be updated and stored on S3. The EC2 will then start the software from a PowerShell script to create proper configurations.Store the software in the AMI of EC2 exactly as configured. This means any time a server is created with a new client configuration, we create a new AMI after installation.Create a Lambda function that can handle all the different configuration parameters. When invoked, it will take care of spinning up a server, moving the software to the server, and installing the software with proper configuration.Any guidance or references to white papers would be appreciated.Thank you!FollowComment"
What are cloud best practices for installing software on many EC2 instances that can be configured many ways?
https://repost.aws/questions/QUP3C2aGnMR5GOqo-OYHlPUA/what-are-cloud-best-practices-for-installing-software-on-many-ec2-instances-that-can-be-configured-many-ways
false
"0Hi - You can consider to look into the following. One from an auto scaling standpoint and one from bringing up the infrastructure.A launch template is similar to a launch configuration, in that it specifies instance configuration information. It includes the ID of the Amazon Machine Image (AMI), the instance type, a key pair, security groups, and other parameters used to launch EC2 instances. For example, you can create a launch template that defines a base configuration without an AMI or user data script. After you create your launch template, you can create a new version and add the AMI and user data that has the latest version of your application for testing . More Reference : https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-templates.htmlDeploying applications on Amazon EC2 with AWS CloudFormation - CloudFormation includes a set of helper scripts (cfn-init, cfn-signal, cfn-get-metadata, and cfn-hup) that are based on cloud-init. You call these helper scripts from your CloudFormation templates to install, configure, and update applications on Amazon EC2 instances that are in the same template. Reference : https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/deploying.applications.htmlWith the software you mentioned , yes there would some sort of script or helper to do the configurations for but above ideas may also help.You can also check Distributor, a capability of AWS Systems Manager, helps you package and publish software to AWS Systems Manager managed nodes. Reference : https://docs.aws.amazon.com/systems-manager/latest/userguide/distributor.htmlCommentShareEXPERTAWS-User-Nitinanswered 8 months ago"
I am not able to access my resources in two regions and I am getting an API error for nearly any operation. What is the reason?FollowComment
Getting API Error in Ohio and N. Carolina
https://repost.aws/questions/QUOU6kw0fDTkqpA47CrVEZaQ/getting-api-error-in-ohio-and-n-carolina
false
"0Hello!This can be caused by a few things, but primarily revolves around access denied or active event.You can check for active events by checking the Service Health Dashboard https://health.aws.amazon.com/health/homeIf it is a permissions issue it is likely IAM policy [1] or SCP related. If possible (if you have access), a good way to troubleshoot this is to go to your CloudWatch console event history. Here you may filter recent events to EventSource "ec2.amazonaws.com". The API calls from your screenshot will be listed there with their error messages. As an example, one of the API calls is likely "DescribeInstances". If this event error message is AccessDenied, consult your AWS administrator or check your user/role IAM policies and SCPs in your Organizational Unit if you are in an organization. If you are unable to view CloudTrail events either, it is likely a regional Deny policy being applied to either your IAM user/role or the OU.[1] https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html[2] https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.htmlCommentSharerePost-User-1886289answered 12 days ago"
"I've got an interesting issue where occasionally a user is able to create an account with email and get verified with the one time passcode and then loses the email_verified prop. I can prove this is happening with a subsequent call to our api (deconstructed auth token jwt below). Strange thing is, later when we check in on this user we find their email_verified is set to false in the console. Has anyone experienced this before with Cognito?FollowComment"
Cognito email_verified gets set to false
https://repost.aws/questions/QUcfi3D0D2T4CqLTxWyvLagw/cognito-email-verified-gets-set-to-false
false
"Hi!,I post this question regarding an issue that happens to us at random times. Maybe someone experienced similar situation or can give us a clue.We have three windows ec2 running inside vpc with internet access that make requests to some public api gateways. We have noticed that at random times (the last issue was April 26 and the previous time was on April 4), all the requests made from the three instances within a range of 5 minutes failed with the message "The request was aborted: Could not create SSL/TLS secure channel"These requests happens almost every minute every day during 12 o 14 hours so it didn´t fail before or after those 5-10 minutes lapse. Its like something happening at api gateway at random times. Can it be that updating api definition with swagger file or deploying to stage may cause this issue? One of the dates matches.We use us-east-1 region and use as endpoint https://xxxxxx.execute-api.us-east-1.amazonaws.com/stagexxxThe status page doesn´t show any issues on those days for api gateway service.Thanks in advanceFollowComment"
At random times connection from ec2 machine to api gateway returns Could not create SSL/TLS secure channel
https://repost.aws/questions/QUl_gjgH5nSEqhlLXvELF96w/at-random-times-connection-from-ec2-machine-to-api-gateway-returns-could-not-create-ssl-tls-secure-channel
false
"Hi,I am wondering if there is any way that a game server can activate itself without receiving the onStartGameSession event?The scenario I am looking at is roughly like this: I am using a 3rd part matchmaking library; This library allows the game server starts a session and wait for the players. I am planning to make the stand-by game servers create their sessions and wait until a player connects to them; At this point I want to make a call to a GameLift api like ActivateGameSession to tell the GameLift the server is activated so that it adds another game server to stand-by pool.The problem I have here is that in the documentation it is mentioned that the ActivateGameSession should be called in response to the onStartGameSession event.Is the scenario I described possible to implement with the GameLift? If it is, how could I do it? Are there any caveats?Look forward to hearing from you.Regards,AydinFollowComment"
How to Make a Game Server Activate Itself?
https://repost.aws/questions/QUcwMQDy6sQ8yd1NKpBexFhw/how-to-make-a-game-server-activate-itself
false
"0Hi Aydin,It sounds like your use case is to have a holding pool of game servers ready to place matches as soon as the matchmaking library returns a match, is that correct? Have you considered using fleet auto-scaling to scale up your fleet based on utilization?Reference: https://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-autoscaling.htmlCommentShareShashankJ_AWSanswered a year agoEXPERTToni_Sreviewed a year ago"
"It doesn't look like we have the ability to create queues, routing profiles for AWS connect via CFT and CDK. Is that true? Is it possible to call createQueue, createRoutingProfile APIs from CDK to achieve this? Is it necessary to use a custom resource or can the API be called from CDK?Thanks!FollowComment"
AWS Connect: create queues and routing profiles
https://repost.aws/questions/QU_-NY70NfSN6m1hqMvivNqw/aws-connect-create-queues-and-routing-profiles
false
"2Hi there!As we do not have API's available for deleting queues or routing profiles, they are not supported in CFT or CDK. You could use a custom resource to manage queues as you described, with your own logic for deletion (a rename is a good pattern).Our API's to create queues and routing profiles could be called from the stack directly, but this means the state of the resource will not be managed by CloudFormation. A Custom Resource is the suggested approach as you identified above. Happy building!CommentShareEXPERTAWS-samfredanswered 2 months ago1You can call those Create APIs in CDK as Custom Resources, https://catalog.us-east-1.prod.workshops.aws/workshops/d93fec4c-fb0f-4813-ac90-758cb5527f2f/en-US/walkthrough/typescript/sample/source-construct/custom-resourceCommentShareClarenceChoianswered 2 months ago0Is support for delete API on the roadmap? Any timeline for it?CommentSharerePost-User-2260698answered 2 months agoAWS-samfred EXPERT2 months agoHi there, you can reach out to your AWS Account team for discussions about items on the roadmap.Share0Hello,Currently creating queues and routing profile using Cloudformation or calling the API in CDK is not supported in Amazon Connect. As mentioned by you the only way to do so is using a custom resource.Also, the AWS Connect team currently has an active feature request for enabling the capability of deleting a queue/routing profile. However, there is no ETA on when this will be available. That being said, I can assure you that our team takes customer feedback very seriously and work meticulously to provide a good customer experience. Further, you could keep an eye on our what's new blog for information regarding new feature releases [1][2][3].[1] AWS Blog - https://aws.amazon.com/blogs/aws/[2] What's New with AWS : https://aws.amazon.com/about-aws/whats-new/customer-engagement/?whats-new-content.sort-by=item.additionalFields.postDateTime&whats-new-content.sort-order=desc&awsf.whats-new-products=general-products%23amazon-connect[3] Connect Announcements : https://forums.aws.amazon.com/forum.jspa?forumID=249CommentSharerePost-User-1233745answered 2 months ago"
"How can I send a message to my IOT device which is connected to AWS IOT core, through Grafana?I am already receiving data from my IOT device:IOT device ---> IOT core --->Amazon message rule-----> Amazon DataStream --->Amazon GRAFANAHow can I reverse this operation;Amazon Grafana---> IOT rule ---->send Mqtt message to IOT deviceFollowComment"
Grafana to send mqtt messages through aws iot core
https://repost.aws/questions/QUATmA-J57SFWDWlRfqO8Auw/grafana-to-send-mqtt-messages-through-aws-iot-core
false
"0Amazon Managed Grafana is a fully managed data visualization service, and while I don't want to say it's not possible to send data from Grafana to AWS IoT devices, I think I'd like to better understand the use case.Can you describe the outcome you're looking to achieve in this scenario? What type of message would you want to source from Grafana and send to your IoT device? Is this IoT device running FreeRTOS or AWS IoT Greengrass (or using the AWS IoT SDK)?CommentShareDarren Ranswered 3 months agorePost-User-6095414 3 months agoI just want to send command to start and stop a motor , and to reset alarmsShare0Hi.Amazon Grafana---> IOT rule ---->send MqttThe rules engine would not likely be involved for downlink. More likely you would use an SDK and call the HTTP iot-data publish API. For example, with boto3: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/iot-data/client/publish.htmlAmazon Managed Grafana doesn't yet support third party plugins so you, can't for example, install a button panel like this: https://grafana.com/grafana/plugins/cloudspout-button-panel/. If you would like to see this supported, please make your voice heard here: https://github.com/aws/amazon-managed-grafana-roadmap/issues. A workaround is to perhaps spin up your own Grafana in EC2 and install a button panel.It should be possible however to use AMG to send automated messages in response to alarm conditions. If an AMG alert triggers a notification on SNS, a Lambda could be used to process the SNS message and publish back to the device.CommentShareEXPERTGreg_Banswered 3 months ago"
"Hi,We are noticing an issue where our automated UI tests are failing on AWS Codepipeline because a table on one of our application webpages fails to load ONLY on AWS codepipeline. The same tests pass on local where the table loads without any issues. Upon investigating, it looks like the API that needs to be called for the table to load is being blocked on AWS codepipeline. Is there a way to disable/enable APIs on Codepipeline? We would greatly appreciate it if someone could please point us to the AWS settings to enable/disable APIs as it would help us further investigate and resolve this issue.Thanks,UmairFollowComment"
Disable/Enable APIs on AWS Codepipeline for test automation runs
https://repost.aws/questions/QUozKUoH55Qe6Ho2J19qJSGg/disable-enable-apis-on-aws-codepipeline-for-test-automation-runs
false
"0Hello Umair,Thank you for your query.Regarding your issue, can you please let me know which API call are you talking about. If this API call is for any IAM related permission, you can ALLOW/DENY it using the Code Pipeline service role from the IAM console.[] Identity and access management for AWS CodePipeline - https://docs.aws.amazon.com/codepipeline/latest/userguide/security-iam.htmlFurther, if the permission is needed at the user level, you can follow below link.[] Grant approval permissions to an IAM user in CodePipeline - https://docs.aws.amazon.com/codepipeline/latest/userguide/approvals-iam-permissions.htmlAlso, since you mentioned that the API call was for loading the table which is failing, can you please let us know if this API call is a web API call? If it is, then we do not need to make changes as suggested above regarding IAM permissions. For a web API call, I would like to mention that CodePipeline does not block web API calls. Therefore you need to check either your build or deploy stage where this API call is getting executed.Further, for troubleshooting specific to your pipeline, I would request you to please reach out to us via AWS Support case so that we can better assist you.Thank you.CommentShareSUPPORT ENGINEERAWS-User-arpitanswered 12 days ago"
"I'm using STM32 L475E_IOT01A2 and I tried to run the sample "Connect to AWS IoT - STM32-B-L475E-IOT01A", however, it always failed to establish the connection.Actually, it always returned secureSocketStatus=-1.Actual message observed on the screen is attached below.1 535 [Tmr Svc] Waiting for 180 seconds before generating key-pair2 180541 [Tmr Svc] WiFi firmware version is: C3.5.2.7.STM3 180546 [Tmr Svc] WiFi firmware is up-to-date.4 180552 [iot_thread] [INFO ][DEMO][180552] ---------STARTING DEMO--------- 5 180560 [iot_thread] [INFO ][INIT][180559] SDK successfully initialized.6 185637 [iot_thread] [INFO ][DEMO][185637] Successfully initialized the demo. Network type for the demo: 17 185647 [iot_thread] [INFO] Creating a TLS connection to a26800ryr2bs98-ats.iot.ap-northeast-1.amazonaws.com:8883.8 185739 [iot_thread] [ERROR] Failed to establish new connection. secureSocketStatus=-1.9 185753 [iot_thread] [WARN] Connection to the broker failed. Attempting connection retry after backoff delay.10 186056 [iot_thread] [INFO] Retry attempt 2 out of maximum retry attempts 5.(I omit following message just indicating repetitions.)I think configuration for aws_clienetcredential_keys.h and aws_clientcredential.h is ok.In aws_clienetcredential_keys.h, keyCLIENT_CERTIFICATE_PEM and keyCLIENT_PRIVATE_KEY_PEM are provided as created by CertificateConfigurator.In aws_clientcredential.h, BROKER_ENDPOINT, IOT_THING_NAME, wifi address and password are set properly.In aws_demo_config, CONFIG_CORE_MQTT_MUTUAL_AUTH_DEMO_ENABLED is defined.Further, I checked and found the problem may exist the handshake of ES_WIFI_StartClientConnection. (Observed handshake sequence is below.)Cmd:P0=0 -> ret=0,Cmd:P1=3 -> ret=0,Cmd:P2=0 -> ret=0,Cmd:P3=(remote IP address) -> ret=0,Now, ES_WIFI_STATUS & TCP_SSL_CONNECTION are ok.Cmd:P9=2 -> ret=0,Cmd:P6=1 -> ret=5, which I think means UNEXPECTED_CLOSED_SOCKETI think it leads to ecureSocketStatus=-1.I repeated many times, however, the result was always the same.Please let me know how to solve this.As I'm really a beginner, your instruction would be highly appreciated.regards,CKAdditonal Information:I downloaded latest sample module from AWS site.Also, I updated wifi firmware module(SPI_C3.5.2.7) through Inventek website.FollowComment"
FreeRtos sample for STM32_L475E_IOT01A cannot establish connection
https://repost.aws/questions/QUN_-EZJlsSKK7vlyZsUtLnw/freertos-sample-for-stm32-l475e-iot01a-cannot-establish-connection
false
"0Hi CK. I have this particular board. Yes, the WiFi socket connection is being closed unexpectedly. What have you setup in IoT Core? I suspect the issue is probably in the cloud configuration. Is the device certificate registered and activated? Do you have an IoT policy attached to it that permits connection? Could you perhaps share the policy here?I'm not clear what instructions you're following from the name "Connect to AWS IoT - STM32-B-L475E-IOT01A". Can you please paste the link?UPDATE: the logs show that a new cert/key is being auto provisioned after 180s. These values will be used in preference to the values configured in aws_clientcredential_keys.h. Therefore the device is attempting connection with a different certificate than the one that has been registered in IoT Core. Hence the failure. I tested the demo on my board - it works fine and does not do the auto provisioning by default. Please check your build settings (as detailed in the comments) and re-build your binary.CommentShareEXPERTGreg_Banswered 10 months agockpinetree 10 months agoHi, Greg_B,Thank you for your comment.I actually followed below.https://ap-northeast-1.console.aws.amazon.com/iot/home?region=ap-northeast-1#/freertos/clone/Connect_to_AWS_IoT_-_STM32-B-L475E-IOT01A/predefinedI think my device certificate is ok.I attached very simple policy as below.{"Version": "2012-10-17","Statement": [{"Effect": "Allow","Action": "iot:","Resource": ""}]}If you need further information, please let me know.Thanking you,CKShareGreg_B EXPERT9 months agoThanks CK. Some asterisks are missing, but I think that might just be a re:Post formatting issue. So I think it's good.I just noticed the "Waiting for 180 seconds before generating key-pair" line in the log. This indicates developer mode provisioning is enabled (meaning you have keyprovisioningFORCE_GENERATE_NEW_KEY_PAIR set to 1). What option did you use? Option 1 or option 2 (or maybe a bit of both)? https://docs.aws.amazon.com/freertos/latest/userguide/dev-mode-key-provisioning.htmlShareckpinetree 9 months agoHi, Greg_B,I followed Option 1.I imported private key from AWS IoT and configure aws_clienetcredential_keys.h by CertificateConfigurator.Also, I checked the file: aws_dev_mode_key_provisioning.c, as below.#define keyprovisioningFORCE_GENERATE_NEW_KEY_PAIR 0I think it remains as default setting, not choosing option 2.I'm not sure why the message indicates as if option 2 is chosen.Pls let me know if any other setting is required.Regards,CKShareGreg_B EXPERT9 months agoHi CK. Please see here: https://github.com/aws/amazon-freertos/blob/747f07402a744ec839ed9950e841142408abd6b0/demos/dev_mode_key_provisioning/src/aws_dev_mode_key_provisioning.c#L1188-L1217This is why I said what I said. Can you please also check the value of pkcs11configIMPORT_PRIVATE_KEYS_SUPPORTED. It should be 1.ShareGreg_B EXPERT9 months agoBased on the logs, the code is also built with USE_OFFLOAD_SSL. This means the key and certificate are stored in the WiFi module (via PKCS11). See here: https://github.com/aws/amazon-freertos/blob/747f07402a744ec839ed9950e841142408abd6b0/demos/dev_mode_key_provisioning/src/aws_dev_mode_key_provisioning.c#L1320It should use the key and cert from aws_clientcredential_keys.h when configured. However, automatic provisioning is seemingly occurring after 180s and this overwrites the key and cert in the WiFi module. So they don't match the cert in IoT Core when you try to connect.ShareShow 3 more comments"