Columns: Description (string, 6 to 76.5k characters), Question (string, 1 to 202 characters), Link (string, 53 to 449 characters), Accepted (bool, 2 classes), Answer (string, 0 to 162k characters)
"I would like to pull CVSS scoring info for particular CVEs from alas.aws.amazon.com. I see endpoints like https://alas.aws.amazon.com/cve/html/CVE-2023-25751.html, and am wondering if a /json or /xml or similar endpoint exists. I've searched documentation, tried calling those directly, and looked through network traffic but don't see any requests that indicate those exist. Is there an API for this site (or another resource I'm not aware of), or do I need to parse the HTML to get that CVSS data?FollowComment"
Is there an API for the Amazon Linux Security Center (alas.aws.amazon.com)?
https://repost.aws/questions/QUinjA-jqiRy6YVk46QFqT9g/is-there-an-api-for-the-amazon-linux-security-center-alas-aws-amazon-com
false
"1alas.aws.amazon.com does not provide an official API for accessing CVSS scoring information for CVEs in JSON, XML, or other structured formats. However, there are alternative sources you can use to obtain CVSS scores for specific CVEs.One option is to use the National Vulnerability Database (NVD) API, which is maintained by the U.S. government. The NVD API provides CVSS scores and other vulnerability information in JSON format. You can query the NVD API for a specific CVE like this:https://services.nvd.nist.gov/rest/json/cve/1.0/CVE-2023-25751CommentShareEXPERTsdtslmnanswered 2 months ago1Hello,Greetings of the day!! Thank you for contacting AWS.I understand that you wish to know if there are any APIs you can use to retrieve CVSS scoring info for particular CVEs from alas.aws.amazon.com. in the form of JSON, XML.To answer your query, unfortunately there are no APIs that you can use to gather data from the ALAS website. The APIs that AWS provides are specific to services which are used for management and data plane operations specific to your AWS Account. You would have to use conventional tools such as scrapers to gather information from publicly hosted websites.For more information regarding deploying a Server-less Architecture for a Web Scraping Solution on AWS, kindly refer the [document](https://aws.amazon.com/blogs/architecture/serverless-architecture-for-a-web-scraping-solution/)Hope the above information has been helpful.Thank you and Wish you a good day ahead!CommentShareSUPPORT ENGINEERAditi_Banswered 2 months ago"
"I need some way to prompt the agent through the Contact Control Panel before they place an outbound call. Is there anyway that I could extend the CCP or provide an UI for the agent to input information prior to making an outbound call? I have looked into creating tasks in the call flow but I did not find anyway that we could provide a UI using tasks in the call flow. I also looked into the amazon connect stream api, while this does allow us to create an UI for the agent. I did not find a way to use it to inside the CCP itself.FollowComment"
Get input from agent before placing an outbound call
https://repost.aws/questions/QUt6KcrEf3QSCoGYGni5zyWA/get-input-from-agent-before-placing-an-outbound-call
false
"0I'm not sure I follow what you're asking. Do you need the agent to enter some DTMF or are you asking them to fill out some form? Assuming it's the form, you could create a form with then launches the CCP to place the outbound calll, but this isn't really a Connect issue, but more of something which should be done in your CRM or similar software.CommentSharedmaciasanswered a year ago0The CCP is not customizable in anyway. You need to use the Streams API to do any customisation of an agent softphone.With the Streams API you embed the CCP. You can then either hide the CCP to make a fully custom softphone or show it and use that for the normal softphone functions. If you show the CCP it is still not possible to alter anything within its frame.CommentShareledgeanswered a year ago"
"I'm looking at a few articles where the author describes how to route traffic from an AWS API Gateway to Fargate tasks without any load balancing.https://medium.com/@chetlo/ecs-fargate-docker-container-securely-hosted-behind-api-gateway-using-terraform-10d4963b65a3https://medium.com/@toddrosner/ecs-service-discovery-1366b8a75ad6The solution appear to rely on AWS Service Discovery which, from what I can tell, creates private DNS records.If my ECS services starts 3 Fargate tasks, is API Gateway smart enough to spread the traffic across all the 3 tasks or not?FollowCommentvitaly_il 12 days agoHi,I'm thinking about the same choice - leave ALB between API Gateway and ECS Fargate tasks, or try AWS Service Discovery. I'm curious if you have some insights here.Share"
Is running Fargate and API Gateway without a load balancer in the production environment a bad idea?
https://repost.aws/questions/QUfhuXtV8KRceDuC5fH_oMwA/is-running-fargate-and-api-gateway-without-a-load-balancer-in-the-production-environment-a-bad-idea
false
"0No, AWS API Gateway does not automatically spread traffic across multiple Fargate tasks. It only routes traffic to one target. To distribute traffic across multiple tasks, you need to use a load balancer, such as an Application Load Balancer or a Network Load Balancer, in front of the tasks.CommentShareDivyam Chauhananswered 4 months ago0Running Fargate and API Gateway without a load balancer in a production environment can have potential drawbacks and risks. Without a load balancer, your service may not have automatic failover, scaling, or management of incoming traffic. It can also increase the risk of downtime and decreased performance during traffic spikes. It is recommended to use a load balancer in a production environment to ensure high availability, scalability, and to offload the management of incoming traffic.CommentShareMuhammad Imrananswered 4 months ago"
"Hi, i need to exit from sandbox in AWS SES, i wrote in support case additional information and then several times clicked on "resolve case" and "reopen case". and i think aws can answer for my cases longer. What should i do, i clicked "reopen case" one more time, but message is still hangs and I don't know what to do, when to wait for an answer.Message:More information needed for your production access requestIn order for Amazon SES to continue with this request, some additional information on your email sending is required. We have created a support case for a member of our team to work directly with you."FollowComment"
How long does it take to get an answer from AWS to my support case?
https://repost.aws/questions/QU8nAwC38DR_eKCAcAXYslRQ/how-long-does-it-takes-to-get-answer-from-aws-to-my-support-case
false
"0Hi there,Our Support team provides an initial response to your request within 24 hours. Per our Documentation:In order to prevent our systems from being used to send unsolicited or malicious content, we have to consider each request carefully. If we're able to do so, we'll grant your request within this 24-hour period. However, if we need to obtain additional information from you, it might take longer to resolve your request. We might not be able to grant your request if your use case doesn't align with our policies.You can read more information about moving out of our SES sandbox here:https://docs.aws.amazon.com/ses/latest/dg/request-production-access.htmlOur Support team will respond to your request in the order received. To prevent delays, please refrain from changing the status of your case, which should stay as "reopened" if you're seeking additional assistance and a response from our Support team.The following page has further details on updating, resolving, and reopening your case:https://docs.aws.amazon.com/awssupport/latest/user/monitoring-your-case.htmlThank you for your patience as we work to assist you. Continue to monitor our Support Center for an update:http://go.aws/support-centerBest regards,- Kita B.CommentShareEXPERTAWS Support - Kitaanswered 6 months ago"
"Currently, studying for the SAA-CO2 (soon to be SAA-CO3) Certified Solutions Architect - Associate exam. I am going into my final year of secondary school. I am wondering how beneficial it will be for me to achieve the certification. I would be pretty much unable to apply for a job in the Tech industry yet due to my very limited experience, lack of a degree (university), and age. By the time I finish university and "enter the world of work", the certification will have expired and it will need resitting to obtain the certification again. Is it worth continuing to work towards the certification? If so, how would it be useful to me...FollowComment"
Are AWS certifications actually useful to a younger student?
https://repost.aws/questions/QUcYfTnNILR2CBJ67SioTq0w/are-aws-certifications-actually-useful-to-a-younger-student
false
"1Even though there may not appear to be any tangible benefit in getting certified at secondary schooling level, at times one things leads to another and having smaller incremental wins over time can lead to something big down the road. At the very least, it gives you confidence that your required knowledge is sufficient for you to be able to pass the SAA. I did my SAA about 4 or so years ago and back then I was a Software developer (relatively new to Cloud), fast forward to present time I started at AWS two weeks back. As a rule of thumb, I have always kept an open mind towards any form of learning opportunity and at times the learning opportunity leads to tangible benefits while on other occasions it helps me broaden my learning horizonCommentSharePiyushMattooanswered 10 months ago1Where do you see yourself after graduating ? I believe the answer lies in the same. Why are you doing the certification ? Is it to get the benefits of the knowledge of services or to have a certificate tag on your resume.If its the latter, I would say, do it once you graduate, however if you want to know the services and how they work to get a different perspective towards the cloud, it is quiet helpful as a level 1 certification.Hope this helps !!CommentShareSUPPORT ENGINEERAWS-User-Chiraganswered 10 months ago0I think it's true to say that for the most part, it may not be immediately beneficial to you, at least directly. However, in the future, it will show potential employers that you were willing to go the extra mile over and above the academic qualifications that school prepares you for, and over your contemporaries.It also shows that you have the initiative to learn more than "just" the school curriculum, and a mindset that is geared towards industry from an early age. You're also laying the first foundational blocks of what will be many qualifications you'll attain over your career, and incrementally, each of those will add to your capabilities and therefore net worth.I'm not criticising academic qualifications, but they are often less well orientated towards the practicalities of a job, as by their nature they focus on the fundamentals and foundations needed to be successful, rather than being focussed on specific technology stacks, and the specialisation that inevitably ensues when you take your career down a path narrower and more specialised than that which school prepares you for. As an aside, 30+ years ago, I nearly gave up on my degree in Computer Science after working for a big global employer for a year in my third year, as I enjoyed it way more than the theory. But the theoretical work was important and has paid me back over the years, even though much of my curriculum is no longer useful in the workplace (COBOL, Fortran, 1970's AI), as has the qualification I attained, even though I couldn't see the practical use for much of it at the time. And all of my qualifications since then have opened doors for me too.Personally, I also think it will hold you in good stead as your career grows. In 10,20, 30 years even, you will also be able to look back and know that you achieved an industry qualification that would usually be taken by someone 5 or 10 years+ older when you were still in your teens. Not many people do that, it's unusual, in a really good, positive way. That alone will not only give you a head start when you enter the workplace, as it demonstrates a capability and a willingness to learn. 
It will stand out on your CV, even though the qualification will have expired by then.It will also give you the personal confidence to tackle things that others may find intimidating. It will help you to foster a mindset of life long learning. Few people actively like exams, but lowering the anxiety around them by taking an industry qualification at a young age may lower your reticence to take exams as you get older, something which many adults find daunting, even in their 20s. Desensitising yourself now to that future anxiety, by your own choice, is effectively removing what for some is a huge barrier to their career as you get older. It encourages an "I can achieve anything I set my mind to" mentality.It's a phrase that sounds a bit of a cliche, but learning doesn't stop with school college or university - they are merely the spring boards to the rest of your career, where you can learn more, specialise and see the bigger picture as a result of continuous learning. I've worked in various areas over the years - development, databases (development and operations), business intelligence, machine learning - and I learn new things every single day, 30+ years on from the start of my career. Not everyone takes that path, as having to learn new things is sometimes viewed as uncomfortable, or taking oneself out of one's comfort zone. Fear of failure is another "adult" trait, which is perhaps reinforced by not trying new things, not having a go at qualifications which might be easily attained.I know numerous, perfectly capable adults of my own age, who don't try new things because they assume they'll fail. That's not something that suddenly happens, it's a mentality that incrementally grows over years of being afraid of taking an exam for fear of failure. The sooner you bop down that fear with a big rubber mallet, the better for yourself !Trying something new, learning new skills, taking exams to test your skills, is the way you grow, learn, get to do new exciting things that might have otherwise have proved impossible. It keeps life and your career interesting and fresh, which in turn keeps your motivation levels high. That's important when you're working for 40+ years, every day needs to be interesting !The only caveat I'd give is if taking the certification is either going to get in the way of your other studies, or cause you financial hardship or unwelcome stress.But if neither of those are issues, I'd encourage anyone with the interest you have at your age to have a go at it, and get the certification as soon as you feel you're ready to take the exam, as my own experiences inform me that it's a positive thing to do and will give you a foundation for learning that you'll not regret in later life. Start now, and never stop learning.From me, it's a definite "Go for it" - what do you have to lose? :-)CommentShareJon Ranswered 9 months ago"
"Hi AWS Community,I got the below shown message from AWS and I would like to use Emailvalidation as I have no clue how the DNS stuff works.The problem is, that I do not have access to the emailsaddresses that the validation emails are sent to (admin@goalplay.com, administrator@goalplay.com, hostmaster@goalplay.com, postmaster@goalplay.com, webmaster@goalplay.com).Does anyone know how I could solve this?It would be of great help.Best regards,MoritzGreetings from Amazon Web Services,You have an AWS Certificate Manager (ACM) SSL/TLS certificate in your AWS account that expires on Sep 02, 2022 at 23:59:59 UTC. That certificate includes the primary domain media.goalplay.com and a total of 1 domains.Certificate identifier: arn:aws:acm:us-east-1:405738972658:certificate/7de2bece-a0d8-4e00-b387-e5e4e781fafeACM was unable to automatically renew your certificate. The domain validation method for this certificate is email validation. This method requires the domain owner or someone authorized by the domain owner to take one of the following actions before Sep 02, 2022 at 23:59:59 UTC. If no action is taken, the certificate will expire, which might cause your website or application to become unreachable.If you can write records into your DNS configuration, you can replace all of your existing email-validated certificates with DNS-validated certificates. After you add a CNAME record to your DNS configuration, ACM can automatically renew your certificate as long as the record remains in place. You can learn more about DNS validation in the ACM User Guide.[1]If you want to continue using email validation to renew this certificate, the domain owners must use the approval link that was sent in a separate validation request email. The validation email is valid for 3 days. ACM customers can resend the validation email after receiving the first notification or any time up until 3 days after the certificate expires. For more information on how to resend a validation email, refer to the ACM User Guide.[2]FollowComment"
Certificate Renewal
https://repost.aws/questions/QULT8Ham1uTWmzBQ94J1iLqw/certificate-renewal
false
"1In case you don't have access to listed emails, you can try option #1 - Adding CNAME entry in DNS (goalplay.com) - if you are using Route53 for your DNS hosting then you will need Route53 access, else your respective DNS hosting provider login.CommentShareAshish_Panswered 9 months ago"
"Hello there,How can I update PHP to 7.4 version or upper?FollowComment"
Cloud9 || PHP version
https://repost.aws/questions/QUdYrWMkY9Th6PoQEs1Ko4gw/cloud9-php-version
false
"1sudo yum -y remove php*sudo amazon-linux-extras | grep phpsudo amazon-linux-extras disable php7.2sudo amazon-linux-extras disable lamp-mariadb10.2-php7.2sudo amazon-linux-extras install -y php8.0[Courtesy: I got these commands from https://greggborodaty.com/amazon-linux-2-upgrading-from-php-7-2-to-php-7-4/]CommentShareEXPERTIndranil Banerjee AWSanswered a year agoIndranil Banerjee AWS EXPERTa year ago@rePost-User-9011840 - If this solved your issue, can you please accept my answer? ThanksShare"
"Something like what's there for Azure - https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/metrics-supportedI have searched & closest documents I got were again having links to bunch of other pages! This at best are making referring to these simple details more convoluted & obviously time consuming... Also, referring them offline is a challenge!All I need is list of services, metric names, unit, aggregation methods supported & description of metric in a standard format.FollowComment"
Is there documentation where I can see a consolidated list of all metrics of all services that can be fetched from CloudWatch?
https://repost.aws/questions/QUjxMN3puTSz2Q89DXknPCXw/is-there-a-documentation-where-i-can-see-consolidated-list-of-all-metrics-of-all-services-that-can-be-fetched-from-cloudwatch
false
"2There is no single page on AWS that contains all the possible metrics from every AWS service. The closest thing is probably AWS services that publish CloudWatch metrics which contains a list of all the services that integrate with CloudWatch metrics and a list to their documentation. I noticed that some of those links are actually broken too, which is not a good look.I'm speculating here, but there is probably no such list because of how AWS is organized internally. The CloudWatch team probably hasn't made available a way for all the other service teams to publish their metrics centrally. Conway's Law sums it up nicely:"Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure."Until such a page exists, you'll be stuck reviewing each service's metrics in isolation I'm afraid.CommentShareEXPERTbwhaleyanswered 7 months agoEXPERTChris_Greviewed 7 months agoVinay B K 7 months agoThanks for your reply. Much appreciated!I tried to download PDF from the link that you have shared hoping that it might have some consolidated list. It has that, but spread across so much that it's hardly usable.Keeping this thread alive for some more time, just in case... Thanks!ShareChris_G EXPERT7 months agoUnfortunately there is no consolidated list because each service team maintains their own lists of CloudWatch metrics. On the link that was posted here, you can go to each of the service's specific metrics pages to build a consolidated list.Share"
"In AWS EC2 when Launching an Instance from Template the search of the images does not find the image I want as well as the search in the Images page, I have to delete a chunk of the image name for it to find it, which takes up more time than it should and triggers me deeply.Have the AWS naughty devs caused this pain to somebody else?Any leads on how to fix it, who else we can ask or who else's communication platforms we can slap?Thanks!FollowComment"
AWS EC2 issue: image selection when launching an instance from a template
https://repost.aws/questions/QUrqwZHCSoRXSsFLDeMTm6Xw/aws-ec2-issue-image-selection-when-launching-an-instance-from-template
false
"1One thing you can try is to search for the image using a different keyword or phrase. Alternatively, you could try using filters to narrow down your search results. For example, you could filter by the operating system, architecture, or storage type.If these options don't work, I would recommend reaching out to AWS support to report the issue and ask for assistance. You can do this through the AWS Management Console by clicking on the "Support" link in the top right corner of the screen. Alternatively, you can submit a support request through the AWS Support Centre.CommentShareAWS_Guyanswered 2 months ago"
"Hi team,Please kindly read carefully the question before answering Thank you :)I previously asked this question, but the responses I received were not relevant.I followed this blog,to use IAM role anywhere to consume AWS services, I was able to do AWS API calls from my machine via CLI and credential helper.But my concrete use case is to deploy from an Azure DevOps instance to ASEA AWS account: for example, deploy from an Azure DevOps to AWS ECS.I'm unsure how to use this concept of an IAM role anywhere to deploy from Azure DevOps to AWS ECS.can anyone help me with links or blogs on how can I implement this use case and integrate Azure With AWS IAM role anywhere or if there is any other way to do it?==> Deploy an artifact from Azure DevOps instance to Amazon ECS fargate. Or at least push from Azure to AWS ECRis this the only way to do (IAM role anywhere) to deploy an artifact from Azure DevOps to ECS or ECR or I can use classic IAM role?if I use the concept of IAM role anywhere, where can I install the certificate and the private key inside Azure?azure Pipeline => produce artifact => deploy it to ECS or ECR without using long-term credentialswhat are the steps to do in Azure side so that the Pipeline Azure can push to AWS?Please I know how the IAM role anywhere works and I already configured it in my AWS account, my question here is not about IAM role anywhere,it's about how can Azure DevOps Pipeline push an artifact to ECS or ECR and what are the steps to do on Azure side?once I have the certificate and the private key where i should install them at Azure level ...?thank you so much!appreciate your HelpThank you!FollowComment"
Deploy an artifact from Azure DevOps to AWS ECS or ECR
https://repost.aws/questions/QUoQ0XbJYDTBCfGIunwLlBhA/deploy-artifact-from-azuredevops-to-aws-ecs-or-ecr
false
"1Hello Jess,I am assuming that you have already looked at the following ways described in this documentation using which you can connect to AWS services from your Azure environment. I am also assuming, none of the above methods suits your objective and hence you are looking at using IAM Roles Anywhere.Disclaimer: I haven't used Azure DevOps pipelines much, and I haven't yet executed all the steps below in my Azure account. Please treat the below steps for your PoC purposes only.Setup the IAM Roles anywhere in your AWS environment based on your requirement and have the certificate and private key ready to be used.Store the certificate and private key from previous step in your Azure Key Vault.In your pipeline step, have the necessary cli's available like aws and aws_signing_helper.In your pipeline step, download the certificate and private key from Azure Key Vault, you might have to write one line code to separate the certificate and private key as per this link.Execute aws_signing_helper cli to get the temporary credentials.Parse the output from the last command using which you can access AWS services as per your role definition.Cleanup the certificate and private key.Please let me know how this works for you. Please also consider any additional security measure so that you are handling the keys in the most secure manner.CommentShareManish Joshianswered 4 days agoJess 4 days agoThank you for the clarification!!Please if I want to use the classic IAM role rolewhat should the IAM role trust policy looks like (Principal section) (I don't have the right to create an IAM user in my AWS account)how can I get credentials form this assumed role in my Azure DevOps Pipeline => using the command line PowerShell?Thank youShare1Hello there,thank you for reaching out.I understand guidance is required to deploy artifacts from Azure DevOps to ECR or all the way to ECS fargate. You had previously explored the use of IAM roles anywhere as a method to have temporary credentials when accessing AWS services from your tasks in Azure.I just want to specify that I do not have much expertise on Azure DevOps as all of the services I work on are AWS services. I did however take the time to dive deeper into this to see if I can be able to share documents that can help in this endeavor.What is necessary to be able to connect with some AWS service from Aure DevOps is the AWS Toolkit for Azure DevOps, it is an addon that can be added for use in Azure DevOps : https://marketplace.visualstudio.com/items?itemName=AmazonWebServices.aws-vsts-toolsThe following are some resources that can help with what you are trying to achieve:1.This AWS workshop gives steps guided by short videos on how to achieve this. I hope this is helpful.https://catalog.us-east-1.prod.workshops.aws/workshops/8a64fbe8-3bbe-4ed9-9868-290c9bb560fe/en-US/700integrate-aws-fargate-with-azure-devopsHere you can see how to create a CI/CD pipeline to deploy containers to a container cluster.You can see how to integrate AWS Fargate with Azure DevOps here.2.This is a slightly more concise article that can be considered but covers only up to pushing to ECR.https://cj-hewett.medium.com/azure-devops-pipelines-build-and-push-a-docker-image-to-aws-ecr-bc0d35f8f126Both solutions involve the use of a AWS service connection in you AzureDevOps environmment. this mostly relies on having an IAM user with the necessary permissions, in this case for ECR or ECS. 
The use of role can be explored as an alternative to using an IAM role, whether as a classic IAM role, or IAM role anywhere.I hope this this information, especially the workshop is helpful and can provide the much needed guidance on how this can be done.Thanks :)CommentShareBohlale_Manswered 4 days ago"
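A hedged sketch of step 5 above, assuming the IAM Roles Anywhere credential helper (aws_signing_helper) is on the pipeline agent's PATH. All ARNs and file paths are placeholders, and the sketch parses the helper's credential-process JSON output (AccessKeyId / SecretAccessKey / SessionToken) so that subsequent AWS CLI calls in the same step can push to ECR or update ECS.

```python
import json
import os
import subprocess

# Placeholder paths and ARNs; in the pipeline these would come from Azure Key Vault
# and from your IAM Roles Anywhere trust anchor, profile, and role.
cmd = [
    "aws_signing_helper", "credential-process",
    "--certificate", "client-cert.pem",
    "--private-key", "client-key.pem",
    "--trust-anchor-arn", "arn:aws:rolesanywhere:eu-west-1:111122223333:trust-anchor/EXAMPLE",
    "--profile-arn", "arn:aws:rolesanywhere:eu-west-1:111122223333:profile/EXAMPLE",
    "--role-arn", "arn:aws:iam::111122223333:role/AzureDevOpsDeployRole",
]

creds = json.loads(subprocess.check_output(cmd))

# Export the temporary credentials so later `aws ecr` / `aws ecs` commands
# in this pipeline step pick them up instead of long-term keys.
os.environ["AWS_ACCESS_KEY_ID"] = creds["AccessKeyId"]
os.environ["AWS_SECRET_ACCESS_KEY"] = creds["SecretAccessKey"]
os.environ["AWS_SESSION_TOKEN"] = creds["SessionToken"]
print("Temporary credentials valid until", creds.get("Expiration"))
```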
"Anyone ever tried? (Oh, and, I had no idea what tags to use...)FollowComment"
How would one install Peertube here?
https://repost.aws/questions/QUUfzsCbKPRv-0OzvJYL3Hig/how-would-one-install-peertube-here
false
"Hi, in experimenting with the classic device shadow I have found I can publish to topics like $aws/things/DEVICE_NAME/shadow/update and correctly see the updates as expected if I first subscribed to topics like .../shadow/update/delta or ../shadow/update/documents topics, but I have not been able to figure out how to see the already existing document at the moment I call subscribe. I tried subscribing to a number of different topics but in app code and simply in the MQTT test client but none of the subscriptions return any data until a subsequent publish is done.Is there something obvious I am missing? Or is there maybe a different method to get the already existing data of the device shadow at the time of subscription? Thanks.FollowComment"
Retrieving current state of device shadow on first subscribe
https://repost.aws/questions/QUVUHHcWktRhazSTbgb8s9kA/retrieving-current-state-of-device-shadow-on-first-subscribe
false
"1Hi,Thanks for reaching out. I understand that you would like to retrieve the existing shadow document of a thing. This is certainly possible but I would like to confirm that a subsequent publish is required to get the existing shadow document. This is discussed here in our documentationFirst, there would be a need to subscribe to the topics ShadowTopicPrefix/get/accepted and ShadowTopicPrefix/get/rejected as this is where the shadow document will be sent(or if the shadow cannot be retrieved).Once subscribed to the above topics, there would be a need to publish an empty message to the ShadowTopicPrefix/get topic. The current shadow will then be returned to the ShadowTopicPrefix/get/accepted topic.I've just tried this out and it does work as expected. I recommend trying it out in the MQTT test client in the AWS IoT console first. There is currently no topic which will return the existing shadow after subscribing without any subsequent publish.An alternative to the above method would be using GetThingShadow API -> https://docs.aws.amazon.com/iot/latest/apireference/API_iotdata_GetThingShadow.html. You can see the SDK equivalent of this API at the bottom of the documentation, for example, here is the link for the python boto3 SDKI would recommend the first method since you are already using MQTT to communicate with IoT but just letting you know that the second method exists as well.Please let us know if you may have any other questions.CommentShareSUPPORT ENGINEERRyan_Aanswered a year agosamvimes a year agoThanks a lot! I figured I might need to publish an empty one when just doing a subscribe but hoped maybe a missed something. Is there an equivalent react-native javascript library that calls GetThingShadow or do you recommend possibly making the API call directly when the app starts?Also just curious what the difference between publishing to and subscribing to the /get and /get/accepted topics vs /update and /updated/acceptedShareRyan_A SUPPORT ENGINEERa year agoHi,Thanks for your response. I checked our docs and we have documentation on how you can use the Javascript SDK in React Native -> https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/getting-started-react-native.html.A publish to the /update topic means that you are trying to update the shadow document. If there is a corresponding message to the /updated/accepted this means that the update has been accepted -> https://docs.aws.amazon.com/iot/latest/developerguide/device-shadow-mqtt.html#update-pub-sub-topicOn the other hand, publishing an empty message to /get only retrieves the current shadow without any updates. As mentioned earlier, the current shadow state will be published to the /get/accepted topic.Hope this helps! Let us know if anything is unclear.Share"
"** <b>AWS Update:</b> Keyspaces now supports TTL: https://aws.amazon.com/blogs/database/announcing-amazon-keyspaces-time-to-live-ttl-general-availability/ **I am seeing an internal error returrned from key space when I try to prepare the following statement.createAccessToken = getSession().prepare( "INSERT INTO access_tokens" + "(atokenid, atoken, useruuid, auth, clientid, expiration, time, code, version)" + "VALUES (?,?,?,?,?, ?,?,?,?) IF NOT EXISTS USING TTL ?;"); The issue appears to be with the USING TTL ? since the following prepare doesn't exhibit the issue.createAccessToken = getSession().prepare("INSERT INTO access_tokens" + "(atokenid, useruuid, atoken, auth, clientid, code, expiration, time, version)"+ "VALUES (?,?,?,?,?, ?,?,?,?) USING TTL 200;");Anybody know of a work around that doesn't have me hard code the ttl?Edited by: ArturoAtAWS on Oct 18, 2021 3:03 PMFollowComment"
prepare() server internal error when using TTL
https://repost.aws/questions/QUnxlzuNr6QCOwmNdybhcIcg/prepare-server-internal-error-when-using-ttl
false
[Score: 0] Seems the issue is moot since Keyspaces doesn't support TTL.
(answered by PaulCurtis, 3 years ago)
[Score: 0] Thanks for the question and info! Our roadmap is driven by customer feedback, so it's super helpful to get more info on how you are using prepared statements with TTL.
(answered by AWS-User-6425534, 3 years ago)
[Score: 0] Launched: https://aws.amazon.com/blogs/database/announcing-amazon-keyspaces-time-to-live-ttl-general-availability/
(answered by AWS-User-6425534, 2 years ago)
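Given the update at the top of the question that Keyspaces now supports TTL, a bind marker for the TTL should no longer need to be hard-coded, but that is an assumption to verify against your cluster. Below is a hedged sketch using the Python cassandra-driver rather than the Java driver from the question; the contact point, keyspace, and bound values are placeholders, and the TLS/SigV4 settings Keyspaces requires are omitted.

```python
from cassandra.cluster import Cluster

# Placeholder contact point; connecting to Amazon Keyspaces additionally
# requires TLS and service-specific or SigV4 credentials, omitted here.
cluster = Cluster(["127.0.0.1"], port=9042)
session = cluster.connect("my_keyspace")  # placeholder keyspace name

# TTL as a bind marker, so the expiry does not have to be hard-coded.
insert_token = session.prepare(
    "INSERT INTO access_tokens "
    "(atokenid, atoken, useruuid, auth, clientid, expiration, time, code, version) "
    "VALUES (?,?,?,?,?,?,?,?,?) USING TTL ?"
)

# Placeholder values; types must match the real table schema.
params = ("id-1", "token", "user-uuid", "auth", "client-1",
          1700000000, 1690000000, "code", 1)
session.execute(insert_token, params + (200,))  # TTL value is bound last
```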
"Running a job to fetch data from S3 and write to GCP BQ using Glue BQ connector by AWS.Everything else is fine, but for one table the second runs seems to fail always with below error.First time it runs fine, I have bookmarks enabled to fetch new data added in S3 and write to BQ.It fails with below error on write function.Unable to understand the null pointer exception thrown.Caused by: java.lang.NullPointerExceptionat com.google.cloud.bigquery.connector.common.BigQueryClient.loadDataIntoTable(BigQueryClient.java:532)at com.google.cloud.spark.bigquery.BigQueryWriteHelper.loadDataToBigQuery(BigQueryWriteHelper.scala:87)at com.google.cloud.spark.bigquery.BigQueryWriteHelper.writeDataFrameToBigQuery(BigQueryWriteHelper.scala:66)... 42 moreFollowComment"
Glue job failing with Null Pointer Exception when writing df
https://repost.aws/questions/QUyVbwPX0RSYWXwaAbLEjSng/glue-job-failing-with-null-pointer-exception-when-writing-df
false
"0Hi,if you have bookmark enabled, are you sure you have new data in S3 for the second run?If not the read step will create an empty dataframe that might cause the write to BigQuery to fail.Currently you might want to implement a try/catch or conditional logic to test if the dataframe you read has data and writes to bigquery only if true otherwise only log a message that there is no available input at the moment.Hope this helps,CommentShareEXPERTFabrizio@AWSanswered a year agoAWS-User-2896664 a year agoYes, more data is present in S3, I have printed the data and checked just before writing, but still it is throwing this error. I thought maybe something around nullability of the columns, but have fixed that too, by setting the nullable property of source to True same as target , but still the same error.I am clueless now!Share"
"Hello,I have been able to find most of the technical specifications for Graviton 2, but I'm missing one piece of data. Does anyone know how many double-precision operations per second per cycle are they capable of performing?Thanks.Edited by: afernandezody on Mar 29, 2020 6:35 PMFollowComment"
Question about tech specs for Graviton 2
https://repost.aws/questions/QUklBhZ9qOSFOaP-xAwgFcsA/question-about-tech-specs-for-graviton-2
false
"0Each core can theoretically do 20GFLOPs of double precision math when using fused-multiply-adds.CommentShareAli_Sanswered 3 years ago0Hello,I am not really sure how you came up with that specific figure. If I assume a frequency of 2.9 GHz, it yields a bit less than 7 (double-precision) FLOPs per cycle. A value of 8 could be expected so my guess is that other limitations decrease the theoretical maximum value per core.Thanks.Edited by: afernandezody on May 13, 2020 6:38 PMCommentShareafodyanswered 3 years ago0The cores each run at 2.5GHz. and can do 4 dual precision multiply accumulates/cycle.CommentShareAli_Sanswered 3 years ago0Not sure why (maybe I read it somewhere) I thought that they run at 2.9 GHz even though the console lists them at 2.5 GHz. Thanks.CommentShareafodyanswered 3 years ago"
"I have two S3 buckets with data tables, namely A and B, and a Glue job, that transforms data from A to B. Both tables contain a column called x. The Glue job performs a GroupBy operation on this column x, which results in transforming all other columns from table A into list type columns for table B. I activate the bookmarking mechanism for the Glue job, so that it processes only new data. That requires, that I also read in inputs from table B (which are outputs of the previous run of the Glue job) in this job and append new items to the list type columns in case a record with a specific value for column x already exists. It is unclear for me how I could update the table B when saving outputs of the Glue job and avoid duplicated values of column x. Does anybody have a hint here? Thanks!FollowComment"
Update Records with AWS Glue
https://repost.aws/questions/QUuGsw3KI4RTSPgB6OK-c5iw/update-records-with-aws-glue
false
[Score: 0] It seems like you are reading source table A and combining it with processed table B. In that case I would say simply overwrite the result in B, and combine tables A and B every time in the Glue job.
(answered by Zahid, a year ago)
Comment from Thomas Mueller (a year ago): In that case I would have to process the entire table B every time in order not to lose any records (e.g. those that are not in the currently processed batch due to bookmarking). For large tables B, that does not sound very efficient to me.
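A hedged PySpark sketch of the overwrite approach from the answer: explode the existing lists in B, union with the new rows from A, and rebuild the lists with one groupBy so each value of x appears only once; the table paths and the column set are placeholders.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("merge-a-into-b").getOrCreate()

# Placeholder paths and columns: A has flat rows (x, y), B has (x, y: array).
new_a = spark.read.parquet("s3://bucket-a/table_a/")
existing_b = spark.read.parquet("s3://bucket-b/table_b/")

# Flatten B back to one row per value, union with the new A rows,
# then regroup so each x appears exactly once in the result.
flattened_b = existing_b.select("x", F.explode("y").alias("y"))
merged = (
    flattened_b.unionByName(new_a)
    .dropDuplicates(["x", "y"])
    .groupBy("x")
    .agg(F.collect_list("y").alias("y"))
)

# Write to a staging prefix first and swap afterwards, since overwriting the
# same prefix that is being read can clobber the input mid-job.
merged.write.mode("overwrite").parquet("s3://bucket-b/table_b_staging/")
```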
"I tried to set up an cross-account Athena access. I could see the database in Lake formation, Glue and Athena under target account. At the beginning I don't see any tables in the target Athena console. After I did something in Lake formation console (target account) I could see a table in target Athena console and query it successfully. But I could not see other tables from the same database even I tried many ways. I always got below error even I the gave the KMS access everywhere (both KMS and IAM role) or turn off the kms encryption in Glue. I don't know what is the actual reason.Below is an example of the error message:The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access. (Service: AWSKMS; Status Code: 400; Error Code: AccessDeniedException; Request ID: cb9a754f-fc1c-414d-b526-c43fa96d3c13; Proxy: null) (Service: AWSGlue; Status Code: 400; Error Code: GlueEncryptionException; Request ID: 0c785fdf-e3f7-45b2-9857-e6deddecd6f9; Proxy: null)This query ran against the "xxx_lakehouse" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: b2c74c7e-21ed-4375-8712-cd1579eab9a7.I have already added the permissions pointed out in https://repost.aws/knowledge-center/cross-account-access-denied-error-s3?Does anyone know how to fix the error and see the cross-account tables in Athena? Thank you very much.FollowComment"
"The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access (Query Id: b2c74c7e-21ed-4375-8712-cd1579eab9a7)"
https://repost.aws/questions/QUYv7NASgITu6rIOz9T_TcOg/the-ciphertext-refers-to-a-customer-master-key-that-does-not-exist-does-not-exist-in-this-region-or-you-are-not-allowed-to-access-query-id-b2c74c7e-21ed-4375-8712-cd1579eab9a7
true
"0Accepted AnswerHii,Have you created the relevant resource links in your Lakeformation console of your target account? If not yet done then, please follow the given documentation and set up the shared tables in your target account.In case, both the source s3 bucket and the source table in Glue are encrypted with different KMS keys then permissions must be given to both of the keys. If both belong to different account then you will have to provide both the resource based and Identity based permissions.In my experience, the error you are seeing arises when the Key policy of the KMS key is not properly defined such that it allows cross account access of the key. Thus, please verify it once.It might be better if you reach out to a Premium Support engineer of Security team as they will be able to have a look at your policies and find out the exact root cause of the error.CommentShareSUPPORT ENGINEERChaituanswered 2 months agorePost-User-2681778 2 months agoHi Chaitu, sorry for the late response. I did create the resource links and the key policy was also correctly defined. But it was caused by the KMS key issue because originally my s3 buckets were encrypted with S3-SSE (which does not support cross-account access) and I switched to KMS encryption after I grant the cross account access through lake formation. I finally destroyed the infrastructure and redeployed everything worked. I felt that I should change S3 encryption from S3-SSE to KMS encryption before I implemented the cross-account access. Thank you very much.Share"
I need to back up and synchronize customer data and control it. Which AWS backup service can do this?
Synchronized data and backup
https://repost.aws/questions/QU1Q9fL7LZSp-s4l578q-UOQ/synchronized-data-and-backup
false
"I am designing an application and I really haven't worked with lamda earlier. Currently I have service repository pattern implemented and I am running my application in an express http server.Current design pattern -Model - Sequelize models to call the db methods onRepository layer - Communication with the Database through modelsService layer - calls to repository layer and implementation of business logicController layer - Call service layer functions and send json responseNow I want to migrate to AWS lambda functions. I came accross usage of layers in lambda to share the code accross all the functions.Is it good design to put all my repositories, services and models in layers and only instantiate related classes and then invoke the functions from the lambdas ?( So lambdas will work like controllers in my current design)Or should I move my entire business logic to lambda functions? The question is due to a blog that I read where I got to know that with every new deployment of layers I'll have to redeploy all the lambdas.Please help!FollowComment"
Should I keep all of my business logic in layers in AWS Lambda functions?
https://repost.aws/questions/QUAKcn7d5CRHimZZR5Omouiw/should-i-keep-all-of-my-business-logic-in-layers-in-aws-lamda-functions-service
true
"2Accepted AnswerAlthough both can work, my recommendation would be to place all your code into the Lambda function. One of the factors that affects cold start time is package size. The package size incudes all the layers. By having all the code in a layer and including it in all functions, you pay extra cold start times for all functions. You should create smaller functions that each includes only what they need. Also, if you make a change to the business logic in one component, you will need to update all functions, which just adds to complexity.You should use layers for things like common 3rd party libraries that you need to use across all functions. Things like monitoring.CommentShareEXPERTUrianswered 7 months agotejas 7 months agoThank you for the help. I am seeing how over using the layers can cause problems in the production in the future. I am going to have repositories and 3rd party utility functions in layers and business logic in lambda functions.Share1Also would like to add that by default you're limited to five layers per lambda. So if you have a sizeable app (i.e. more than a couple models) you likely won't be able to attach everything you would need to the lambdas that need to do the processing.https://docs.aws.amazon.com/lambda/latest/dg/invocation-layers.htmlCommentSharetherealdakotalanswered 7 months ago"
"i accedently deleted mu amazon connect instance, and i need to restore the did phone number or claim a new one?!FollowComment"
amazon connect restoring phone number
https://repost.aws/questions/QU75xdZbLARqiEPvAYL0Nkow/amazon-connect-restoring-phone-number
true
"1Accepted AnswerHello ,Thank you for posting your question on the AWS repost, my name is Rochak and it will be a pleasure assisting you with this today.I understand you accidentally deleted your Amazon connect instance and you would like to restore the phone number associated with it. Please, let me know if my understanding is incorrect.Please kindly note when you delete an instance, we release its claimed phone number back to inventory. When you call the phone number that is released, you'll get a message that it's not a working phone number. So, you can’t restore a deleted instance or access its settings, data, metrics, and reports. [1]Please kindly note that once your phone number is released from your Amazon Connect instance:•You will no longer be charged for it.•You cannot reclaim the phone number.•Amazon Connect reserves the right to allow it to be claimed by another customer. [2]I hope this helps. If you need further info, let me know in the comments; otherwise I'd appreciate if you mark my answer as "accepted".Kind regards,Rochak from AWSReferences:[1] What happens once the Amazon connect instance is deleted?https://docs.aws.amazon.com/connect/latest/adminguide/delete-connect-instance.html[2] What happens once the phone number is released?https://docs.aws.amazon.com/connect/latest/adminguide/release-phone-number.htmlCommentShareAWS-Rochakanswered 3 months agorePost-User-6014145 3 months agohello and thanks Rochak,that was so helpful, now i understand that I cannot restore the same DID phone number...but how can I claim another phone number so i can receive and make calls?every time I try to claim a phone number i get this message "You've reached the limit of Phone Numbers. To increase limit, contact support."I'll be thankful if you can help.Share0Hello ,Thank you again for the question. I understand you are getting the message "You've reached the limit of Phone Numbers. To increase limit, contact support."As you may already know there are various service quotas for Amazon Connect.[1]The phone number per instance is 5 but also please note that it is possible to get an error message that "You've reached the limit of Phone Numbers," even if it's the first time you've claimed a phone number and all the issues that cause this error message require help from AWS Support to resolve.If you have a support plan with AWS, please open a case with AWS Support and choose the service as “AWS Connect” and one of the engineer from “AWS Connect” will be able to assist you with that. [2]You can also request a quota increase by filling out this form. [3]Please kindly note that I do not have capacity to make adjustment to your quotas at my end.I hope this helps. If you need further info, let me know in the comments; otherwise I'd appreciate if you mark my answer as "accepted".Kind regards,Rochak from AWSReferences:[1] Amazon Connect service quotashttps://docs.aws.amazon.com/connect/latest/adminguide/amazon-connect-service-limits.html#connect-quotas[2] AWS Premium Supporthttps://aws.amazon.com/premiumsupport/[3] Requesting a quota increasehttps://docs.aws.amazon.com/connect/latest/adminguide/amazon-connect-service-limits.htmlCommentShareAWS-Rochakanswered 3 months ago0Hi,Can you confirm what did you delete. is it instance or number ?Best Regards,Ramy HusseinCommentShareRamyHanswered 2 months ago"
"I'm trying to implement Cognito User Pools in my Amplify app via this example (https://docs.amplify.aws/lib/graphqlapi/authz/q/platform/js/#cognito-user-pools). Importing the 'aws-appsync' package yields several errors like:ERROR in ./node_modules/graphql/index.mjs 25:0-49Module not found: Error: Can't resolve './version' in 'C:\Users\...\node_modules\graphql'Did you mean 'version.mjs'?BREAKING CHANGE: The request './version' failed to resolve only because it was resolved as fully specified(probably because the origin is strict EcmaScript Module, e. g. a module with javascript mimetype, a '*.mjs' file, or a '*.js' file where the package.json contains '"type": "module"').The extension in the request is mandatory for it to be fully specified.Add the extension to the request.I can address this with webpack config, but I'd have to eject my entire app from Create React App OR go the craco/react-app-rewired route. Surely there's an easier way?FollowComment"
aws-appsync package breaks create-react-app
https://repost.aws/questions/QUF-oR2sYrRdeqKauWQa1Pew/aws-appsync-package-breaks-create-react-app
false
"Currently on the default dashboard for RDS proxy the DatabaseConnectionsCurrentlySessionPinned metric is always empty despite the fact that hundreds of connections are being pinned.After consulting AWS support we were informed that this metric is actually a snapshot which is quite confusing and not useful for users.The DatabaseConnectionsCurrentlySessionPinned metric in RDS proxy dashboard should be fixed so it actually matches the number of occurrences of connection pinning. We can see the correct metric if we access that via Cloudwatch metrics, the same metric query should be used to populate the default RDS proxy dashboard.FollowComment"
Fix DatabaseConnectionsCurrentlySessionPinned metric on RDS Proxy dashboard
https://repost.aws/questions/QUfPWoiFxmR7-lios5NrFwBA/fix-databaseconnectionscurrentlysessionpinned-metric-on-rds-proxy-dashboard
false
"0Hi Andre de CamargoI understand that on the default dashboard for RDS proxy the DatabaseConnectionsCurrentlySessionPinned metric is always empty despite the fact that hundreds of connections are being pinned. After consulting AWS support we were informed that this metric is actually a snapshot which is quite confusing and not useful for users. Please correct me if my understanding is wrong.The above metrics is a direct result of Dimension set. You can use the following commands to verify that all components of the connection mechanism can communicate with the other components.Examine the proxy itself using the describe-db-proxies command. Also examine the associated target group using the describe-db-proxy-target-groups Check that the details of the targets match the RDS DB instance or Aurora DB cluster that you intend to associate with the proxy[1].Use the following commands.aws rds describe-db-proxies --db-proxy-name $DB_PROXY_NAMEaws rds describe-db-proxy-target-groups --db-proxy-name $DB_PROXY_NAMEYou can change certain settings associated with a proxy after you create the proxy. You do so by modifying the proxy itself, its associated target group, or both. Each proxy has an associated target group[2].I hope the above information is helpful.References:1.https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy.troubleshooting.html#rds-proxy-verifying2.https://docs.amazonaws.cn/en_us/AmazonRDS/latest/UserGuide/rds-proxy-managing.html#rds-proxy-modifying-proxyCommentShareNonkululekoanswered a year ago0HiSeems like CloudWatch console you can see the metrics but not from the RDS console. I would suggest raise a feature request or bug report request from the RDS console lower left corner Feedback button.Seems like you also have support plan available, you may raise the same concern through support case as well.As this metrics is a periodic snapshots, using the RDS proxy log to search "The client session was pinned to the database connection" can be another method of seeing the actual pined sessions.CommentShareSUPPORT ENGINEERKevin_Zanswered a year ago"
"I have a cloudfront distribution with two origins. The first is an S3 static-website bucket and the second is an ALB. I also configured an extra behavior (apart from the default) to forward all api requests to the ALB.CNAME - service.example.com**Behavior: **Path Pattern - api/*Cache Policy - CachingDisabledOrigin Request Policy - AllViewerOriginOrigin Path - /api/v1The objective is to fetch https://service.example.com/api/v1/something when I try to access https://service.example.com/api/something.This doesn't work. If I access service.example.com/api/anything the URL does not even get rewritten to service.example.com/api/v1/anythingIs there a CloudFront behavior I'm not aware of that's making me misconfigure this?Edit to add:I enabled ALB access logging and this is how all requests look:https 2022-04-21T08:35:38.496934Z app/example-service/48c3493fa5414f88 65.49.20.66:45416 10.0.2.91:8000 0.001 0.002 0.000 404 404 39 178 "GET https://44.198.88.248:443/ HTTP/1.1" "-" ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 arn:aws:elasticloadbalancing:us-east-1:ACCOUNTID:targetgroup/example-service/1d723b6babea76f0 "Root=1-6261175a-69c4fa13012685" "-" "arn:aws:acm:us-east-1:ACCOUNTID:certificate/1d74321b-[snip]-539" 0 2022-04-21T08:35:38.493000Z "forward" "-" "-" "10.0.2.91:8000" "404" "-" "-"Referring to the syntax of this log line, it seems like "GET https://44.198.88.248:443/ HTTP/1.1" is the requested path. There is no path here even though I requested /api/v1/something/else.FollowComment"
Cloudfront not respecting Origin Path
https://repost.aws/questions/QUVw8J2y20QVq_hmY-WXCVTw/cloudfront-not-respecting-origin-path
false
"0Are you specifying your api origin as an ALB or as an HTTP/S server? If the former (which I think is correct) then wouldn't the constructed origin URL be based on the default ALB domain not your CNAME?CommentShareEXPERTskinsmananswered a year agoyg-2073661 a year agoI specify the API origin as an HTTPS server. If I use the ALB from the drop-down list, Cloudfront tries to connect to the ALB FQDN over HTTPS and fails because the TLS cert is only valid for the CNAME (service.example.com) and not *.elb.aws.amazon.com.So, I have a domain name, say, api-alb.example.com pointing to the ALB (ALIAS-A rec on Route53) and the origin is set to api-alb.example.com over HTTPS. This ensures that the certificate configured at the ALB is valid for the FQDN to which Cloudfront is trying to connect.Share0I have the exact same behaviour but wasn't able to get it to work. Did you have any luck?CommentShareFelipe Pletsanswered a year ago"
"An AWS Partner is designing a data late/warehouse system for a security-conscious customer. The partner has the following questions related to designing a Redshift data warehouse and related ETL pipelines. I wanted to seek your help to validate my understanding on the following:1. In general, what is the recommended approach to do incremental loads/CDC from source systems to Amazon Redshift? 2. Is it recommended to use AWS Glue to also export data from Amazon Redshift (hot data) to Amazon S3 (cold data)?3. When do we need to run updates to the AWS Glue data catalogs? Is it only when the source table definition is changed? Any other scenarios?4. Is there a scheduling / workflow mechanism to define loading dependencies into Amazon Redshift?5. Is there a mechanism in Amazon Redshift to allow for retries if the data load fails?  E.g. dimension data needs to be loaded completely before fact data table loads starts? Is AWS Glue workflow the suggested tool here as well?6. Can Tableau be used to query data from both Amazon Redshift and Amazon Redshift Spectrum at the same time?7. Would there be any recommended sharable resource, such as a presentation with best practices and approach for designing a data warehouse in AWS?Any insights on any the above would be highly appreciated.Thank you.FollowComment"
Redshift data warehouse and Glue ETL design recommendations
https://repost.aws/questions/QU9ykNVvWuQ1WRJ19SEr7f8w/redshift-data-warehouse-and-glue-etl-design-recommendations
true
"0Accepted Answer1.In general, what is the recommended approach to do incremental loads/CDC from source systems to AWS Redshift?It depends on the type of source system. For relational databases DMS can continuously replicate changed data. For files on S3, Glue jobs have a Bookmark feature that stores a marker of the most recently loaded data e.g. a timestamp or primary key value.2.Is it recommended to use AWS Glue to also export data from Redshift (hot data) to S3 (cold data)?There are two main methods to export data from Redshift to S3. Use the UNLOAD command, or INSERT data into S3 using a Spectrum external table.3.When do we need to run updates to the Glue data catalogs? Is it only when the source table definition is changed? Any other scenarios?Glue Crawlers can be run to update metadata in the Glue Data Catalog when the structure of a table has changed e.g. a column is added/dropped, and also when partitions have been added to the table. The Glue API can also be used for this purpose, and doesn’t incur the Crawler cost.4.Is there a scheduling / workflow mechanism to define loading dependencies into Redshift?This can be done in a number of ways. Step Functions and Glue Workflows can be used. Also, Redshift now has a built in scheduler.5.Is there a mechanism in Redshift to allow for retries if the data load fails? E.g. dimension data needs to be loaded completely before fact data table loads starts? Is AWS Glue workflow the suggested tool here as well?An orchestration tool can be used for this purpose e.g. Step Functions or Glue Workflows. An alternate, if the transformations are built with stored procedures, is to orchestrate individual loading procedures (for dimensions or facts) with a central loading procedure.6.Can Tableau be used to query data from both Redshift and Redshift spectrum at the same time?Yes. S3 objects are exposed in Spectrum via external tables in Redshift. Redshift external tables appear and behave just like normal internal tables. Views can be created that join internal tables with external tables and return hot (internal) and cold (external) data in a single view.7.Would there be any recommended sharable resource, such as a presentation with best practices and approach for designing a data warehouse in AWS?The Redshift Database Developer Guide documentation has a section that is useful for low level Redshift best practices.CommentShareAdam_Ganswered 3 years ago"
"I run My App in Private Subnet and add Oauth2 Login without NAT Gateway.To do Oauth login, I consider setting proxy server like nginx or squid in Public Subnet.Here, I would like to ask a question because I am confused about the concept of Proxy.Is Oauth2 Login possible with Forward Proxy? Or should I use Reverse Proxy?Can nginx and squid be run as Forward Proxy and Reverse Proxy at the same time?Is there a way to do Oauth2 Login without NAT Gateway instead of Proxy?FollowComment"
App Oauth2 Login in Private Subnet without NAT Gateway
https://repost.aws/questions/QUauXfx5KUTOWaoKKOM-GasA/app-oauth2-login-in-private-subnet-without-nat-gateway
false
"Hello,We want to use the AWS SES service, in the price description we saw that a fee of $1250 per month is charged for the deliverability panel. We do not need this panel.Can I opt out? How can we do it?If we request production access, will we be charged for the deliverability panel?FollowComment"
AWS SES Deliverability Dashboard
https://repost.aws/questions/QUueWMuMg_RxuE6irkkd6arA/aws-ses-deliverability-dashboard
false
"1Hello,Short answer: yes and no.Deliverability dashboard for AWS SES is an "Additional service" and not enabled by default. These are the additional services provide by SES:Dedicated IP addressesBring Your Own IP Addresses (BYOIP)Deliverability DashboardYou can opt-in or opt-out for these additional services when ever you want.Basic charge for SES only includes (pricing may vary by regions):Email messages: You pay $0.10 for every 1,000 emails you send or receive.Outgoing data charges: You pay $0.12 per gigabyte (GB) of data in the messages you send. This data includes headers, message content (including text and images), and attachments.Incoming mail chunks: An incoming mail chunk is 256 kilobytes (KB) of incoming data, including headers, message content (text and images), and attachments. When you use Amazon SES to receive email, you pay $0.09 for every 1,000 incoming mail chunks.You can find more information about SES pricing here: https://aws.amazon.com/ses/pricing/Regards,FrankyCommentShareFranky Chenanswered a year agoAlessio Dal Bianco 7 months agoHello Franky, but by enable the "Virtual deliverability manager" will "Deliverability Dashboard" getting activated automatically? (I hope not)The problem is that there is no clear way to know if this dashboard will be activated or not.Share"
"I would like to enable GuardDuty via Organisations, and would like to know whether the existing member accounts on the main administrative account (by invitation) switch to 'enabled via Organisations' automatically.Also, can I designate a GuardDuty administrator account from outside the organisation (being from a separate organisation).FollowComment"
Enabling GuardDuty via Organisations
https://repost.aws/questions/QUXAlaV5seR6WYR4RPlRVr7g/enabling-guardduty-via-organisations
true
"2Accepted AnswerIf you have already set up a GuardDuty administrator with associated member accounts by invitation, and the member accounts are part of the same organization, their Type changes from by Invitation to via Organizations when you set a GuardDuty delegated administrator for your organization.If the new delegated administrator previously added members by invitation that are not part of the same organization, their Type is by Invitation. In both cases, these previously added accounts are member accounts to the organization's GuardDuty delegated administrator.You cannot designate an account outside of your organization as a GuardDuty administrator account.CommentShareAlex_AWSanswered a year agoEXPERTLuca_Ireviewed a year agoAyse Vlok a year agoThank you!Share0You cannot delegate Admin to an account outside of your organization as it uses Org based roles for access across the accounts.You can find more information about enabling org wide integration in the AWS docs: https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_organizations.htmlCommentShareEXPERTRob_Hanswered a year agoAyse Vlok a year agoThank you!Share"
"Hello everybody,I am trying to start a docker service on aws beanstalk by mounting an EFS filesystem.I keep getting the error:Failed to resolve "fs-XXXXXX.efs.eu-west-1.amazonaws.com" - check that your file system ID is correct, and ensure that the VPC has an EFS mount target for this file system ID.Attempting to lookup mount target ip address using botocore. Failed to import necessary dependency botocore, please install botocore first.ERROR: Mount command failed!I made the release with a zip file containingdockerfileDockerrun.aws.jsonapplication.jar.ebextensionscontaining: storage-efs-mountfilesystem.configI set the EFS subnet by opening port 2049 in inbound.and I also tried with the default vpc which has everything openThanks for your supportFollowComment"
Docker: Elastic Beanstalk with EFS filesystem
https://repost.aws/questions/QUOlvrRXxSTcO8lyzweBsSdg/docker-elastic-beanstalk-with-efs-filesystem
false
"Hey,I've been playing around the public Video On Demand stack available at https://github.com/aws-solutions/video-on-demand-on-aws. Successfully tested a few .mp4 video files, however ran into a really strange issue for a larger .mp4 video file (14GB). The MediaConvert Job is stuck in TRANSCODING status at 90% of progress and don't fully understand why. What I noticed is that the retry count increases. I don't know how to find attached logs for the job, is it even possible to review them?Any idea or suggestion how to discover the issue?FollowComment"
[MediaConvert] Job has stuck at 90%
https://repost.aws/questions/QUK8Ra9s1qTKq5-gffCPjGkw/mediaconvert-job-has-stuck-at-90
false
"0Hi there,The primary means to monitor MediaConvert jobs is through the job status, CloudWatch Events, and/or CloudWatch Metrics. Unfortunately MediaConvert does not send log information to your CloudWatch Logs. If the job stays in the PROGRESSING state for more than 48 hours, the job will progress to the ERROR state, and you can use the provided error code along with this table to determine more information on the cause of the issue: https://docs.aws.amazon.com/mediaconvert/latest/ug/mediaconvert_error_codes.htmlIf the job results in an error or continues to remain stuck, I would also recommend capturing the job ID and opening a support case so that an engineer can investigate the job details.CommentShareSUPPORT ENGINEERMichael_Lanswered a year ago"
We are using the Windows AWS Workspaces client to connect to a Linux Workspaces VM. We are finding that whenever we use the "Snipping Tool" app on the local machine (which has the Workspaces client installed), after a few seconds the Workspaces client will say that it has lost connectivity and then disconnect. It happens consistently when using the tool, and we have no issues with connectivity / network reliability when we don't use the snipping tool. We are using the latest version of the Workspaces client.Does anyone know why this might be and how to fix it?FollowComment
AWS Workspaces disconnects when using screenshot / snipping tool
https://repost.aws/questions/QUwPL9ScpqTfyp0DqJF2q8Ow/aws-workspaces-disconnects-when-using-screenshot-snipping-tool
false
"0Hi Andrew,Have you enableb advanced logging on the WorkSpaces client windows machine and look for the error message at the time of the crash? This would give some additional details/insight as to what caused the crash.The Windows client logs are stored in the following location:%LOCALAPPDATA%\Amazon Web Services\Amazon WorkSpaces\logsIf you launch a new Amazon Linux WorkSpaces from the Amazon Public/Default Bundle, can you reproduce the issue? If not, it could be related to an out of date package/binary.Do you know what version of Amazon Linux WorkSpace you are running and if the OS has been updated to the latest?You can try to reboot the Linux WorkSpaces and wait for 10-15 minutes (to allow any boot up process to run and execute).Then connect to it using the WorkSpaces client, then run update (e.g. sudo yum update) to ensure it has the latest.Once the latest updates are installed, attempt to re-produce the issue.Last, if you cannot progress, then I would suggest opening an AWS Premium Support Case to allow a Premium Support engineer to review the log bundle and assist with further troubleshooting.CommentShareEXPERTDzung_Nanswered a year ago0Thanks for the tip. I checked the log and found this during the time of the error:2022-04-06T14:46:22.548Z 00000000-0000-0000-0000-000000000000 LVL:3 RC:-500 VCHAN_PLUGIN :tera_clpbd ==> Client window lost focus. Requesting clipboard contents from host.2022-04-06T14:46:38.231Z 00000000-0000-0000-0000-000000000000 LVL:2 RC:-505 MGMT_KMP :Dropping a mouse event (overflow) - flushing queue!It repeats this quite a few times.I think I'm taking from this that perhaps the size of the clipboard copy (from the snipping tool) causes the workspace to overload and then disconnect. I'm going to experiment with turning the auto copy off on the snipping tool to see if this fixes it (at least then I know what the problem is!).Any other thoughts / ideas welcomed :DCommentShareAndrewanswered a year ago0I think it is clear this is due to the clipboard overflowing, causing a failure and eventually disconnection.When I turn off the "copy to clipboard" option in the snipping tool, the issue goes away.Is there any fix to this? i.e. preventing Workspaces essentially crashing when dealing with a large clipboard?CommentShareAndrewanswered a year ago"
"In many case, ddos test is one kind of the stress/load test.In the article "Network Stress Test" ((https://aws.amazon.com/tw/ec2/testing/))it's not allow to do simulate ddos test to try the limitation to the service while dstress/load test is OK.What is the difference between ddos test and stress/load test in AWS definition?Are there any clarify explains can provide to me ?FollowComment"
What is main difference between DDoS testing and Stress/load test?
https://repost.aws/questions/QUSG3KiEamRUGABaUqnCe_pg/what-is-main-difference-between-ddos-testing-and-stress-load-test
true
"0Accepted AnswerHello rePost-User-2012324,AWS's definition of a stress test as stated in the document you provided is "when a test sends a large volume of legitimate or test traffic to a specific intended target application" while DDOS tests are defined in the same document as "Tests that purposefully attempt to overwhelm the target and/or infrastructure with packet or connection flooding attacks, reflection and complexity attacks, or other large volumes of traffic are not considered network stress tests but are considered distributed denial of service (DDoS) tests."The main difference between them is that one is purposely attempting to overload the target/infrastructure with attacks. Not legitimate traffic but traffic that is intended to maliciously deny the service of legitimate traffic. Stress/load testing is testing your infrastructure with legitimate traffic that you are expecting and monitoring your environment for how it handles that traffic. There can be some confusion since both can cause denial-of-service but one is done maliciously via attacking methods (syn flooding / amplification / reflection / ICMP flood etc) and the other is testing your target/infrastructure with legitimate and expected traffic to see how your environment can handle it.I hope this provides some clarity!-NickCommentShareNickanswered 10 months ago"
"Is there official documentation showing the regions in which SageMaker AutoPilot is supported? From my understanding, it should work with the SDK wherever SageMaker is supported, while in the no-code mode only where SageMaker Studio is available. Is this true?Thanks!FollowComment"
SageMaker AutoPilot Regions
https://repost.aws/questions/QU_7jk19ozQSeyjTAR_D_hEA/sagemaker-autopilot-regions
true
"0Accepted AnswerSageMaker Autopilot works in all the regions where Amazon SageMaker is available today as noted in this blog post "Amazon SageMaker Autopilot – Automatically Create High-Quality Machine Learning Models With Full Control And Visibility". In addition, Autopilot is also integrated with Amazon SageMaker Studio, which is available in us-east-1, us-east-2, us-west-2 and eu-west-1. For a current list of available regions, please check the AWS Regional Services List.CommentShareAWS-User-4866184answered 3 years ago"
"I have an app that gets a roadway and direction of travel from a user. I then send a ConfirmIntent response asking to confirm that this is the information they want. If a user says 'I want I-235 soutbound' it prompts them to confirm. The Lex documentation states:For example, Amazon Lex wants user confirmation before fulfilling an intent. Instead of a simple "yes" or "no" response, a user might respond with additional information. For example, "yes, but make it a thick crust pizza" or "no, I want to order a drink." Amazon Lex can process such additional information (in these examples, update the crust type slot or change the intent from OrderPizza to OrderDrink).However if I try providing different information like 'yes, but northbound', or 'I-235 northbound' it does not switch the slot values to the newly provided ones. Is the documentation just wrong, or has someone gotten this to work?Here is an example of the ConfirmIntent I send back."dialogAction": { "type": "ConfirmIntent", "message": { "contentType": "SSML", "content": "<speak>Please confirm you want reports on <say-as interpret-as='address'>I-235</say-as>, going south</speak>" }, "intentName": "RouteIdAndDirectionIntent", "slots": { "routeId": "235", "cityName": null, "disambiguatedRouteName": null, "routeDirection": "SB", "routeType": null, "countyName": null } }FollowComment"
ConfirmIntent does not change intent slots with additional user input
https://repost.aws/questions/QU3iHrnJTRRxuUOQ-4VcQejQ/confirmintent-does-not-change-intent-slots-with-additional-user-input
false
"Hi all,So I've been developing POCs with Ground Truth and A2I, both do not have a VPCConfig in their related boto3 method. If I would like to associate them with a VPC, how should I approach it?Thank you!FollowComment"
Ground Truth in VPC
https://repost.aws/questions/QUQttNFE7fT-mfE9hdGtDwbA/ground-truth-in-vpc
true
"0Accepted AnswerCurrently VPC is not supported, but you can restrict which source IP's are allowed to connect to the Ground Truth job.CommentShareKevinanswered a year agorePost-User-0813700 a year agoThank you for your response, restricting source IP is a great idea!Share"
"I want to create an IAM role that have permission to unload only one schema in redshift, is it achievable?FollowComment"
IAM role that has permission to unload only one schema in Redshift
https://repost.aws/questions/QUpxvLQA_gTb6Zewqy7C_Bcw/iam-role-that-have-permission-to-unload-only-one-schema-in-redshift
false
"0It is not possible to create an IAM role with permissions specific to a single schema in Amazon Redshift because IAM roles are used to manage permissions for AWS services, and Redshift uses its own internal permission management system for granting access to its objects (databases, schemas, tables, etc.).To achieve what you're looking for, you should create a new Redshift user with limited permissions and then create a temporary IAM role or credentials that the user can assume to access the necessary S3 bucket for the UNLOAD operation.CommentShareEXPERTsdtslmnanswered 2 months ago"
"We need to establish a "Web socket connection" to our AWS servers using Django, Django channels, Redis, and Daphne Nginx Config.Currently local and on-premises config is configured properly and needs help in configuring the same communication with the staging server.We tried adding the above config to our servers but got an error of access denied with response code 403 from the server for web socket request.below is the Nginx config for stagingserver { listen 80; server_name domain_name.com domain_name_2.com; root /var/www/services/project_name_frontend/; index index.html; location ~ ^/api/ { rewrite ^/api/(.*) /$1 break; proxy_pass http://unix:/var/www/services/enerlly_backend/backend/backend.sock; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_read_timeout 30; proxy_connect_timeout 30; proxy_send_timeout 30; send_timeout 30; proxy_redirect ~^/(.*) $scheme://$host/api/$1; } location /ws { try_files $uri @proxy_to_ws; } location @proxy_to_ws { proxy_pass http://127.0.0.1:8001; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_redirect off; } location ~ ^/admin/ { proxy_pass http://unix:/var/www/services/project_name_backend/backend/backend.sock; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_read_timeout 30; proxy_connect_timeout 30; proxy_send_timeout 30; send_timeout 30; proxy_redirect off; } location /staticfiles/ { alias /var/www/services/project_name_backend/backend/staticfiles/; } location /mediafiles/ { alias /var/www/services/project_name_backend/backend/mediafiles/; } location / { try_files $uri /index.html; }}and Systemctl service to execute Django Daphne service[Unit]Description=Backend Project Django WebSocket daemonAfter=network.target[Service]User=rootGroup=www-dataWorkingDirectory=/var/www/services/project_name_backendExecStart=/home/ubuntu/project_python_venv/bin/python /home/ubuntu/project_python_venv/bin/daphne -b 0.0.0.0 -p 8001 project_name_backend.prod_asgi:application[Install]WantedBy=multi-user.targetBelow is the Load Balancer security group config inbound rulesListner Config for Load BalancerFollowComment"
Django Daphne Websocket Access Denied
https://repost.aws/questions/QUzPAVmH5wSMKU9_kdKbchSQ/django-daphne-websocket-access-denied
false
"Hi,I have 2 X t4g.nano reserved instances, which I like to merge them to create a t4g.micro RI.documentation in https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-modifying.html is catered only for splitting the larger RIs to smaller RIs.what I want is to merge smaller RIs to create a bigger one.ThanksFollowComment"
EC2 Reserved Instances how to merge
https://repost.aws/questions/QUCQ2VePUNQMmcEGIHKh5ghg/ec2-reserved-instances-how-to-merge
true
"1Accepted AnswerIf you have Standard RIs, you can only merge them using modification feature if start & expiry dates on both RIs are the same. If, for example, you bought one .nano RI in April, and another .nano RI in July, they will have different start & end dates, so you would not be able to merge them.If you are using Linux/UNIX RIs with Regional scope and Shared tenancy, you don't need to merge them, because they are size flexible and will be applied to your larger running instance with other matching attributes automatically.However, if both your RIs are Convertible, you can select both of them, and then use the "Exchange" feature to convert them into one bigger instance.CommentShareEXPERTNataliya_Ganswered 9 months agosravan 9 months agoIf you have Standard RIs, you can only merge them using modification feature if start & expiry dates on both RIs are the same. If, for example, you bought one .nano RI in April, and another .nano RI in July, they will have different start & end dates, so you would not be able to merge them.this clears why I'm unable to merge. my 2 nano RIs have different start and end dates.thanks for the helpShare0Hi ThereIf you are running Linux then you don't have to do anything, the two t4g.nano RI's will apply to your t4g.micro instance. Please see "Instance Size Flexibility" in the documentation herehttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/apply_ri.html#ri-instance-size-flexibilityYou can also merge two or more Reserved Instances into a single Reserved Instance. For example, if you have four t2.small Reserved Instances of one instance each, you can merge them to create one t2.large Reserved Instance. For more information, see Support for modifying instance sizes.CommentShareEXPERTMatt-Banswered 9 months agosravan 9 months agoThanks Matt-B, for the response.You can also merge two or more Reserved Instances into a single Reserved Instance.I don't find any related option in "Modify Reserved Instances" screen. I only see option to splitShare"
"We have a single VPC that up until now we controlled access to an EC2 instance via firewall rules permitting a static IP from specific company offices. We now have a need to have a freelance developer get access to the EC2 instance, but we do not want to give access to our corporate remote access nor do they have a static IP.I'm attempting to setup client VPN into the VPC. We have gotten as far as connecting, but can't seem to get access to the EC2 instance via SSH (or even ping it).Example IPs:EC2 VPC IP 10.0.0.2EC2 default gateway 10.0.0.1Client VPN IP 10.0.1.2I'm able to ping 10.0.0.1 (the VPC default gateway) from both the EC2 instance and the client VPN, so this gives me the impression that the routing is setup correctly. I cannot ping the EC2 instance 10.0.0.2 from the client VPN. The default route on the EC2 instance is 10.0.0.1, so it shouldn't require any routing back to the client VPN network.We setup ingress rules on the security group associated with the VPC to allow ICMP and SSH from 10.0.1.0/24, but no joy.Any suggestions on where to look next?FollowComment"
Client VPN access to VPC
https://repost.aws/questions/QUWHL0wKAfRFC2vQ9FEVL2rA/client-vpn-access-to-vpc
false
0Disregard. Turns out I had to add the VPC subnet to the ingress rules because the client VPN is NATed.CommentSharemorleyzanswered 2 years ago
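A minimal sketch of the fix described above, adding the VPC subnet CIDR (rather than the Client VPN client CIDR) to the instance's security group; the security group ID and CIDR are placeholders.

```python
import boto3

# Minimal sketch: because Client VPN source-NATs traffic to the associated
# subnet, allow SSH/ICMP from the VPC subnet CIDR rather than the VPN
# client CIDR. Group ID and CIDR are placeholders.
ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "10.0.0.0/24", "Description": "SSH from VPC subnet (Client VPN NAT)"}],
        },
        {
            "IpProtocol": "icmp",
            "FromPort": -1,
            "ToPort": -1,
            "IpRanges": [{"CidrIp": "10.0.0.0/24", "Description": "Ping from VPC subnet (Client VPN NAT)"}],
        },
    ],
)
```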
"Guys, how do I point my domain to the subdirectory which is my Wordpress on my LAMP stack and not the LAMP directory...FollowComment"
Pointing domain to a sub directory - Wordpress on LAMP
https://repost.aws/questions/QUnIuF5IZlSt6Uque1tgVHcQ/pointing-domain-to-a-sub-directory-wordpress-on-lamp
false
0Virtual Host was configured. thank you allCommentShareanavrinanswered 4 years ago
"We have a use case of migrating data from AWS DocumentDB to MySQL DB. In the target DB, we have an existing table where we want this ongoing migration to happen. Is this possible or does DMS only support migration to a brand new table in the target DB?FollowComment"
Does AWS DMS only support migration to brand new tables in the target DB?
https://repost.aws/questions/QUd8wdL1gmTtGeOLrINjPB-w/does-aws-dms-only-support-migration-to-brand-new-tables-in-the-target-db
false
"1Per the documentation, there is no limitation on the target table in a MySQL database destination.Some of the limitations that are present are:The data definition language (DDL) statements TRUNCATE PARTITION, DROP TABLE, and RENAME TABLE.Using an ALTER TABLE table_name ADD COLUMN column_name statement to add columns to the beginning or the middle of a table.Aurora Serverless is available as a target for Amazon Aurora version 1.Using an Aurora Reader endpoint.Since it sounds like you are doing continuous replication, also take a look at the Ongoing replication task documentation.CommentShareCarlo Mencarellianswered 5 months ago"
"I created 3 EC2 instances within 3 subnets. Meanwhile, I created 3 network interfaces in the subnets for the 3 EC2 instances. I terminated the 3 EC2 instances. However, the 3 subnets and network interfaces can not be deleted because their status is in use.How to remove them?FollowComment"
How to remove network interfaces?
https://repost.aws/questions/QUc77-e89aRmCSxC2s2hdcvA/how-to-remove-network-interfaces
false
"1Does this help? https://repost.aws/questions/QUsua3vTNJTEC2bE5JoqS8xg/cannot-delete-network-interfaceCommentShareEXPERTskinsmananswered 5 months agorePost-User-2672104 5 months agoIt does not work. When I attempted to detach the network interface, there is an error--The network interface can't be detached, Network interface is in use by another service.Share0Which was the use case for these ENIs? Did you use them to connect to any other service? Can you review in the Network interface ID Under Description you can see which is the service is attached. It seems you have any other service attached to it, first you need to find that service before being able to delete the ENI.CommentShareVicky Gracianoanswered 5 months ago"
"i made an ec2 instance, put an inbound rule for 3306, made a remote user with peivilages,when i try to connect the user in another computer it works but only on the teminal,when i try to connect fron the workbench it gives me the "could not connect to ssh tunnel : authuntecation failed"popup screen.i assured that the keypair is right, the username and pass are right.could really use some assestance.FollowComment"
ec2 instance not connecting to mysql workbench
https://repost.aws/questions/QUzm1DLIbTQGidptFj77Uj2A/ec2-instance-not-connecting-to-mysql-workbench
false
0Both of these articles walk you through how to set this up using Sessions Manager port forwarding.How can I use an SSH tunnel through AWS Systems Manager to access my private VPC resources?Securely connect to an Amazon RDS or Amazon EC2 database instance remotely with your preferred GUICommentShareEXPERTkentradanswered 3 months ago
"I'm having some trouble with a slice not loading after manipulating it in windows.I'm getting the following error when the level is opened: [Error] Slice asset {2A23859A-D8C3-564C-BB43-ECAF41F74D80}:1 not ready or not found! Slice data based on the asset will likely be lost.The level I was working on initially had 1 entity. I created a slice from it in order to create a new entity based on it in the editor. I also performed some operations on the slice file in windows, such as duplicating/renaming it and maybe deleting some copies. When I reloaded the level today, the initial entity and the second entity were missing and I was getting the error.I tried restoring an earlier version from source control. Our source control currently tracks just the project directory, and it hasn't caused problems until now.My questions:What is trying to open the slice asset? Where is it looking?When I created the first entity, I didn't create a slice. Where is that slice stored?Is there a small amount of files that can be added to source control to prevent this? The source control outlined here is not feasible: http://docs.aws.amazon.com/lumberyard/latest/userguide/lumberyard-upgrading.htmlFollow"
Slice asset not loading - source control not fixing problem
https://repost.aws/questions/QUL0_Lsb2QTD6NMiWqsJMkjA/slice-asset-not-loading-source-control-not-fixing-problem
true
"0Accepted AnswerHello @REDACTEDUSERSlice assets are self-contained meaning that all the information it needs is in the .slice asset however they could have dependencies on other assets like lua scripts, other slices...etc. Slice assets are just xml files so you should be able to inspect the files using any text editor.As far as source control goes, committing everything in your assets directory should be fine and is recommended. However, there could be other dependencies outside of that directory like Gems. If your slice depends on an asset in a Gem and you haven't committed that Gem then you would run into cases where the slices wouldn't properly load.Hope that helps!SharerePost-User-5162062answered 6 years ago0Hey @REDACTEDUSERSharerePost-User-5738838answered 6 years ago0Good morning!I am one of the technical artists here on Lumberyard Support, and I am currently looking in to your inquiry. First I wanted to know if I could get a little additional information from you:What version of Lumberyard are you on?Which Visual Studio are you currently using?Which version control software are you using?Thank you so much for taking the time to reach out to us and I hope I can be a help!-MajorBlissSharerePost-User-4093265answered 6 years ago0Thanks for your help. I'm using:LY 1.9.0.1VS 2015 CEgit version 2.13.0 (I am aware that git is not ideal for LY projects... there are too many files and it takes up too much storage. We are hoping to find a solution but we have pushed it off for now.)SharerePost-User-6682533answered 6 years ago"
The newly introduced 6th pillar i.e. Sustainability Pillar is NOT seen in the review. I created a workload in Well-Architected Review tool of AWS today. I used default AWS Well-Architected Framework Lens. While reviewing I could see only the old 5 pillars. Appreciate if someone can provide any insight or solution for getting the 6th pillar as well while reviewing the workload.FollowComment
The new Sustainability Pillar (6th pillar) is not seen in AWS WAR
https://repost.aws/questions/QUJW-cjGWUSA6JHsR3gbKAWg/the-new-sustainability-pillar-6th-pillar-is-not-seen-in-aws-war
false
9Support for the sustainability pillar within the AWS WA Tool will be available in 2022.Please refer https://aws.amazon.com/well-architected-tool/faqs/ for more details.CommentShareSanjay Aggarwalanswered a year agoAWS-User-2798603 a year agoThank you for your quick answer. Also I could see only 3 Lenses. I am interested in other Lenses like SAP lens and Hybrid Network also. Any idea on when these other lenses will be available and supported?ShareSwwapnil a year ago@AWS-User-2798603 -Please check SAP Lens for the AWS Well-Architected Framework - https://aws.amazon.com/blogs/awsforsap/introducing-the-sap-lens-for-the-aws-well-architected-framework/Hybrid Networking Lens:https://docs.aws.amazon.com/wellarchitected/latest/hybrid-networking-lens/hybrid-networking-lens.htmlShare0Here is the AWS Well-Architected Framework of Sustainability Pillar.AWS Blog Link:https://aws.amazon.com/blogs/aws/sustainability-pillar-well-architected-framework/CommentShareSwwapnilanswered a year ago0The Pillar is now available in the AWS Well-Architected Tool.CommentShareSamiranswered a year ago
We've set up cross account access as described here. All our cross account queries have been working up to now.But now we've set up a workgroup that uses Athena engine V3. Our queries have generally worked fine in this workgroup except for some cross account queries. We get errors similar toINVALID_VIEW: line 5:8: Failed analyzing stored view 'cadatasource.cadb.caview': line 16:6: Table 'AWSDataCatalog.mydb.mytable' does not existWe see no errors from these queries when we run them in our primary (engine V2) workgroup. It appears that Athena engine V3 swaps the default data source (AWSDataCatalog in this case) from our account into the cross account view (cadatasource.cadb.caview). It assumes the tables that the cross account view references are in the default data source in the account from which we are issuing the query instead of the actual data source of the view in the other account.FollowComment
[Athena Engine V3] INVALID_VIEW: line 5:8: Failed analyzing stored view 'cadatasource.cadb.caview': line 16:6: Table 'AWSDataCatalog.mydb.mytable' does not exist
https://repost.aws/questions/QUggHfLn8ZSYC751ZFCXdz3w/athena-engine-v3-invalid-view-line-5-8-failed-analyzing-stored-view-cadatasource-cadb-caview-line-16-6-table-awsdatacatalog-mydb-mytable-does-not-exist
false
"after setting up the ses , we OCCASSIONALLY get the error 'Unable to read data from the transport connection: An established connection was aborted by the software in your host machine. ' occasionally, kindly adviceFollowComment"
Unable to read data from the transport connection: An established connection was aborted by the software in your host machine.
https://repost.aws/questions/QUUMUMxcXfTTqkff6bD4FxFg/unable-to-read-data-from-the-transport-connection-an-established-connection-was-aborted-by-the-software-in-your-host-machine
false
"I recently requested a quota increase for my VPCs per Region, but it seems that my request exceeded the resource limits. As a result, I wasn't able to create the cluster I needed to. Here is the error message:Cluster creation failed due to a client error: Cluster creation was cancelled because it will fail from exceeding resources limits. These are the minimum limits for resources required to successfully create your cluster:132.0 TiB EBS gp2, service code: ebs, quota code: L-D18FCD1DHow can I lower my quota, so that I can successfully create a cluster? Thank you!FollowComment"
Quota increase for "VPCs per Region" Exceeded Resource Limit -- How to Lower my Quota?
https://repost.aws/questions/QUQgXgF6ETSQ2--lYH44Btkw/quota-increase-for-vpcs-per-region-exceeded-resource-limit-how-to-lower-my-quota
false
"1As I'm reading it, the error isn't saying the quota is too high. It sounds like the EBS volume being requested is above what you amazon account is allowed. Make sure your values and units are correct.Did the quota increase go through completely, meaning it was applied fully? I'll also mention that many service quota increases are regionally based. So an increase for VPCs allowed in us-west-2 will not increase the number of VPCs allowed in eu-west-1. I would also double check the quota increase was in the correct region and has been successfully processed.CommentShareCarlo Mencarellianswered 4 months ago"
"Hi as stated in the title the SecureTunneling Component fails to restart after an unexpected loss of power. It is able to resume following a reboot which leaves me to believe that some sort of cleanup is skipped. Hope someone can help me figure out where the problem lies.Here is the section of the aws.greengrass.SecureTunneling.log file following the loss of power (the first line being just before the shutdown):2022-09-08T16:24:51.389Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [INFO ] 2022-09-08 16:24:51.388 [pool-3-thread-1] SubscribeResponseHandler - Secure tunnel process completed with exit code: 0. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:31:48.616Z [INFO] (pool-2-thread-16) aws.greengrass.SecureTunneling: shell-runner-start. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=STARTING, command=["java -jar /greengrass/v2/packages/artifacts/aws.greengrass.SecureTunneling/1.0..."]}2022-09-08T16:32:35.374Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [INFO ] 2022-09-08 16:32:35.238 [main] SecureTunneling - Starting secure tunneling component!. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:43.046Z [WARN] (Copier) aws.greengrass.SecureTunneling: stderr. Sep 08, 2022 4:32:41 PM software.amazon.awssdk.eventstreamrpc.EventStreamRPCConnection$1 onConnectionSetup. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:43.057Z [WARN] (Copier) aws.greengrass.SecureTunneling: stderr. INFO: Socket connection /greengrass/v2/ipc.socket:8033 to server result [AWS_ERROR_SUCCESS]. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:44.818Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:45.944Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:32:44.681 [main] SecureTunnelingTask - Could not connect to Greengrass Core V2. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:45.947Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. java.util.concurrent.TimeoutException: null. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:45.970Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1886) ~[?:?]. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:45.972Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2021) ~[?:?]. 
{scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:45.995Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. at com.aws.greengrass.component.securetunneling.utils.IPCUtils.connectToGGCOverEventStreamIPC(IPCUtils.java:78) ~[GreengrassV2SecureTunnelingComponent-1.0-all.jar:?]. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:46.014Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. at com.aws.greengrass.component.securetunneling.utils.IPCUtils.getEventStreamRpcConnection(IPCUtils.java:35) ~[GreengrassV2SecureTunnelingComponent-1.0-all.jar:?]. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:46.037Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. at com.aws.greengrass.component.securetunneling.executor.tasks.SecureTunnelingTask.connectToGGCOverEventStreamIPC(SecureTunnelingTask.java:104) ~[GreengrassV2SecureTunnelingComponent-1.0-all.jar:?]. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:46.040Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. at com.aws.greengrass.component.securetunneling.executor.tasks.SecureTunnelingTask.<init>(SecureTunnelingTask.java:52) ~[GreengrassV2SecureTunnelingComponent-1.0-all.jar:?]. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:46.067Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. at com.aws.greengrass.component.securetunneling.executor.SecureTunnelingExecutor.<init>(SecureTunnelingExecutor.java:39) ~[GreengrassV2SecureTunnelingComponent-1.0-all.jar:?]. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:46.112Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. at com.aws.greengrass.component.securetunneling.SecureTunneling.main(SecureTunneling.java:36) ~[GreengrassV2SecureTunnelingComponent-1.0-all.jar:?]. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:46.119Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:32:45.962 [main] SecureTunneling - Exception initializing secure tunneling task.. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:46.150Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. com.aws.greengrass.component.securetunneling.exceptions.SecureTunnelingTaskException: Exception running secure tunneling task.. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:46.152Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. at com.aws.greengrass.component.securetunneling.executor.tasks.SecureTunnelingTask.<init>(SecureTunnelingTask.java:56) ~[GreengrassV2SecureTunnelingComponent-1.0-all.jar:?]. 
{scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:46.179Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. at com.aws.greengrass.component.securetunneling.executor.SecureTunnelingExecutor.<init>(SecureTunnelingExecutor.java:39) ~[GreengrassV2SecureTunnelingComponent-1.0-all.jar:?]. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:46.200Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. at com.aws.greengrass.component.securetunneling.SecureTunneling.main(SecureTunneling.java:36) ~[GreengrassV2SecureTunnelingComponent-1.0-all.jar:?]. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:46.211Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. Caused by: java.util.concurrent.TimeoutException. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:46.229Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1886) ~[?:?]. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:46.231Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2021) ~[?:?]. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:46.255Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. at com.aws.greengrass.component.securetunneling.utils.IPCUtils.connectToGGCOverEventStreamIPC(IPCUtils.java:78) ~[GreengrassV2SecureTunnelingComponent-1.0-all.jar:?]. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:46.278Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. at com.aws.greengrass.component.securetunneling.utils.IPCUtils.getEventStreamRpcConnection(IPCUtils.java:35) ~[GreengrassV2SecureTunnelingComponent-1.0-all.jar:?]. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:46.280Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. at com.aws.greengrass.component.securetunneling.executor.tasks.SecureTunnelingTask.connectToGGCOverEventStreamIPC(SecureTunnelingTask.java:104) ~[GreengrassV2SecureTunnelingComponent-1.0-all.jar:?]. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:46.295Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. at com.aws.greengrass.component.securetunneling.executor.tasks.SecureTunnelingTask.<init>(SecureTunnelingTask.java:52) ~[GreengrassV2SecureTunnelingComponent-1.0-all.jar:?]. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:46.317Z [INFO] (Copier) aws.greengrass.SecureTunneling: stdout. ... 2 more. 
{scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}2022-09-08T16:32:47.919Z [INFO] (Copier) aws.greengrass.SecureTunneling: Run script exited. {exitCode=1, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}FollowComment"
SecureTunneling Component fails to restart after unexpectedly losing power.
https://repost.aws/questions/QUzsQx3j-WRvaBMNRlMd31tg/securetunneling-component-fails-to-restart-after-unexpectedly-loosing-power
false
"0I reached the character limit in the post...Here is the section of the aws.greengrass.SecureTunneling.log file following a reboot (with this time the two first line being the ones before the reboot)[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [INFO ] 2022-09-08 16:16:54.059 [pool-3-thread-1] SubscribeResponseHandler - Secure tunnel process completed with exit code: 0. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: Run script exited. {exitCode=143, serviceName=aws.greengrass.SecureTunneling, currentState=FINISHED}[INFO] (pool-2-thread-16) aws.greengrass.SecureTunneling: shell-runner-start. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=STARTING, command=["java -jar /greengrass/v2/packages/artifacts/aws.greengrass.SecureTunneling/1.0..."]}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [INFO ] 2022-09-08 16:21:43.500 [main] SecureTunneling - Starting secure tunneling component!. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[WARN] (Copier) aws.greengrass.SecureTunneling: stderr. Sep 08, 2022 4:22:05 PM software.amazon.awssdk.eventstreamrpc.EventStreamRPCConnection$1 onConnectionSetup. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[WARN] (Copier) aws.greengrass.SecureTunneling: stderr. INFO: Socket connection /greengrass/v2/ipc.socket:8033 to server result [AWS_ERROR_SUCCESS]. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[WARN] (Copier) aws.greengrass.SecureTunneling: stderr. Sep 08, 2022 4:22:08 PM software.amazon.awssdk.eventstreamrpc.EventStreamRPCConnection$1 onProtocolMessage. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[WARN] (Copier) aws.greengrass.SecureTunneling: stderr. INFO: Connection established with event stream RPC server. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [INFO ] 2022-09-08 16:22:08.353 [main] SecureTunnelingExecutor - Starting secure tunneling.. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [INFO ] 2022-09-08 16:22:10.456 [Thread-1] SecureTunnelingTask - Successfully subscribed to topic: $aws/things/Lumca_98f07b5d2fcf/tunnels/notify. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [INFO ] 2022-09-08 16:23:30.669 [Thread-1] SubscribeResponseHandler - Received new tunnel notification message.. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. 
[ERROR] 2022-09-08 16:23:31.584 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: 2022-09-08T16:23:31.488Z [WARN] {FileUtils.cpp}: Permissions to given file/dir path '/tmp/' is not set to recommended value... {Permissions: {desired: 745, actual: 777}}. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.605 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: 2022-09-08T16:23:31.489Z [WARN] {Config.cpp}: Path replace_with_root_ca_file_location to RootCA is invalid. Ignoring... Will attempt to use default trust store.. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.616 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: 2022-09-08T16:23:31.489Z [INFO] {Config.cpp}: Successfully fetched JSON config file: {. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.620 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "endpoint": "replace_with_endpoint_value",. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.637 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "cert": "replace_with_certificate_file_location",. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.638 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "key": "replace_with_private_key_file_location",. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.639 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "root-ca": "replace_with_root_ca_file_location",. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.657 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "thing-name": "replace_with_thing_name",. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.658 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "logging": {. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.667 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "level": "ERROR",. 
{scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.676 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "type": "STDOUT",. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.686 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "file": "/var/log/aws-iot-device-client/aws-iot-device-client.log". {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.687 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: },. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.702 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "jobs": {. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.703 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "enabled": false,. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.716 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "handler-directory": "replace_with_path_to_handler_dir". {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.718 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: },. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.720 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "tunneling": {. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.720 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "enabled": true. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.721 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: },. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.741 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "device-defender":{. 
{scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.742 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "enabled":false,. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.743 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "interval": 300. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.753 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: },. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.759 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "fleet-provisioning": {. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.760 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "enabled": false,. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.761 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "template-name": "replace_with_template_name",. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.762 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: "csr-file": "replace_with_csr-file-path". {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.762 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: }. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.763 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: }. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. [ERROR] 2022-09-08 16:23:31.786 [pool-3-thread-1] SubscribeResponseHandler - Secure Tunneling Process: 2022-09-08T16:23:31.490Z [DEBUG] {Config.cpp}: Did not find a runtime configuration file, assuming Fleet Provisioning has not run for this device. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}[INFO] (Copier) aws.greengrass.SecureTunneling: stdout. 
[INFO ] 2022-09-08 16:24:51.388 [pool-3-thread-1] SubscribeResponseHandler - Secure tunnel process completed with exit code: 0. {scriptName=services.aws.greengrass.SecureTunneling.lifecycle.run.script, serviceName=aws.greengrass.SecureTunneling, currentState=RUNNING}CommentSharenicanswered 9 months agoGreg_B EXPERT9 months agoIs this running on a Raspberry Pi? Can you see the console when it boots up after re-powering? If so, is fsck running during the boot, before Greengrass starts up?Share0Hello,Can you update GreenGrass Nucleus and see if that solves the problem?Thanks!ShaneCommentShareshangablanswered 6 months ago"
"I have tested AWS VPN Client app with two versions of OpenVPN config:config-a.ovpn: The ca, cert, key payloads are specified as file paths (These files definitely exist!)clientdev tunproto udpremote cvpn-endpoint-XXXX.prod.clientvpn.us-west-2.amazonaws.com 443remote-random-hostnameresolv-retry infinitenobindremote-cert-tls servercipher AES-256-GCMverb 3ca /foo/bar/ca.crtcert /foo/bar/client.crtkey /foo/bar/client.keyreneg-sec 0config-b.ovpn: The ca, cert key payloads are inlined in the config file. (using xml-like tags)clientdev tunproto udpremote cvpn-endpoint-XXXX.prod.clientvpn.us-west-2.amazonaws.com 443remote-random-hostnameresolv-retry infinitenobindremote-cert-tls servercipher AES-256-GCMverb 3<ca>...</ca><cert>...</cert><key>...</key>reneg-sec 0While the config-b.ovpn doesn't have any issue establishing connections, the config-a.ovpn causes an error message popup saying, "VPN process quit unexpectedly".I have confirmed that config-a.ovpn itself is valid: openvpn --config config-a.ovpn has no issue.[edit]More infomration:VPN Client app: AWS VPN Client 3.1.0Operation System: macOS 12.6 (M1 max)FollowComment"
AWS VPN Client cannot handle some OpenVPN options.
https://repost.aws/questions/QUUdyYxtvkQyKCpiP6V9IXyQ/aws-vpn-client-cannot-handle-some-openvpn-options
false
"0[Hi,I tested with the exact same configuration and it works perfectly fine. I tested in windows and pls find the snippet of the client logs.2022-10-21 18:14:58.020 +08:00 [INF] Validating ca path: c:\Temp\ca.crt2022-10-21 18:14:58.200 +08:00 [DBG] Validating file path: c:\Temp\ca.crt2022-10-21 18:14:58.276 +08:00 [DBG] Backslash count: 42022-10-21 18:14:58.276 +08:00 [DBG] Double backslash count: 22022-10-21 18:14:58.277 +08:00 [INF] Validating cert path: c:\Temp\svr.crt2022-10-21 18:14:58.277 +08:00 [DBG] Validating file path: c:\Temp\svr.crt2022-10-21 18:14:58.333 +08:00 [DBG] Backslash count: 42022-10-21 18:14:58.333 +08:00 [DBG] Double backslash count: 22022-10-21 18:14:58.334 +08:00 [INF] Validating key path: c:\Temp\svr.key2022-10-21 18:14:58.334 +08:00 [DBG] Validating file path: c:\Temp\svr.key>2022-10-21 18:14:59.700 +08:00 [DBG] CM received: >LOG:1666347299,,VERIFY OK: depth=1, CN=abcserveraLOG:1666347299,,VERIFY KU OKLOG:1666347299,,Validating certificate extended key usageLOG:1666347299,,++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server AuthenticationLOG:1666347299,,VERIFY EKU OKLOG:1666347299,,VERIFY OK: depth=0, CN=serversfsdfsfLOG:1666347299,,Control Channel: TLSv1.2, cipher TLSv1/SSLv3 ECDHE-RSA-AES256-GCM-SHA384, 2048 bit RSALOG:1666347299,I,[server] Peer Connection Initiated with [AF_INET]X.X.X.X:443I dont see you have any issues with open vpn configuration file. since you have place the correct certificate and keys in place.I would suggest you to look for openvpn client logs which gives you more information.Log file location:- https://openvpn.net/vpn-server-resources/troubleshooting-client-vpn-tunnel-connectivity/]()CommentShareAWS-User-7653869answered 7 months agojinux 7 months agoI forgot to mention that I am using AWS VPN Client 3.1.0 as a VPN client on macOS. The link you refer to me is for OpenVPN Connect client. "/Library/Application Support/OpenVPN" directory does not exist on my machine.Share"
"Customer who is using Terraform, encountered issues with racing condition of IAM role and AWS Resources creation.For example, in their TF creation of a Step Function and IAM role, the Step Function failed due to missing IAM role dependencies.I don't recall encounter similar issue in Cloudformation. can I ask if Cloudformation has internal dependencies check on such eventual consistency such as IAM role prior to other resources creation, and that is missing in Terraform?Also, they have raised a ticket to Terraform (https://github.com/terraform-providers/terraform-provider-aws/issues/7893), is there any stop gap? One option I can think of is to pre-create the IAM role prior to the AWS resources.FollowComment"
Racing condition of IAM role creation and AWS resources
https://repost.aws/questions/QUuRbUA7NPSNCZHjP91uW_1Q/racing-condition-of-iam-role-creation-and-aws-resources
true
"0Accepted AnswerYou'll need to await Terraform maintainers to comment on the bug that has been raised but in essence, CFN checks for resources to be 'stable' before continuing onto dependent items. It's happened before too - see https://github.com/terraform-providers/terraform-provider-aws/issues/838 for example.This is a great opportunity to talk about the advantages of CFN.CommentShareAWS-User-8827288answered 4 years ago"
"One frustrating thing about using Polly is finding that small changes to a sentence have a dramatic impact on the way words in the sentence are pronounced. For example, consider the following sentences:Your watch will now be activated.Your car will now be activated.If you have Polly say this using English - Indian - Raveena, the word "activated" will be pronounced very differently between the two sentences. In fact, the "will now" will also be pronounced a bit differently.Is there anything I can do, aside from fully manually defining a phoneme for the word "activated", to get Polly to use one of the pronounciations instead of the other, regardless of the structure of the sentence? I realize there's a lot of smarts involved in getting a sentence to be spoken, but it's frustrating to know she can say "activated" better than she is in my particular sentence.Thanks.FollowComment"
How to explicitly choose from several pronounciations
https://repost.aws/questions/QUw9RzrZbSSu-jYmJVCrqmsA/how-to-explicitly-choose-from-several-pronounciations
false
"0Hi DanGoyette,Thanks for reporting this issue. Unfortunately, it looks like this is a unit selection issue that is only reproducible in this instance. Currently, there is no easy way to fix the rootcause. But you can work around the issue by tweaking the phonemes in the phoneme tag slightly and passing it:Your watch will now be <phoneme alphabet="x-sampa" ph='%ak.tIv."eIt.Id'> activated.</phoneme> ```Best, AanushaCommentShareAanushaanswered 4 years ago0Thank you for the help. I raises one other question: In order to make it easier to author phoneme tags, is there a way to output speech as phonemes? That way, if I like the way the system pronounces something, I could ideally copy/paste the phoneme it's using.I tried to change the output type to Speech Marks, selecting SSML as the Speech Mark type, hoping that it would output text with phoneme tags. However, it only appears to output an empty file. And I wouldn't be surprised if phoneme tags are just used as input into your system, and the system doesn't generate phoneme tags as part of its text to speech conversion process.Anyway, the more general question is whether there's an easier way to author phoneme tags that sound correct, other than trial-and-error?Thanks.CommentShareDanGoyetteanswered 4 years ago"
When trying to change an Aurora DB column that is a part of a foreign key an error is thrown:Error 1832: Cannot change column 'A_id': used in a foreign key constraint 'B_ibfk_1'This makes Aurora incompatible with some of MySQL-tested products like this: https://github.com/ory/kratos/issues/2044This issue is reproducible with Golang mysql driver and Aurora DB as a database.Here is the code to reproduce the problem: https://github.com/splaunov/aurora-fk-problemThe same Golang code works fine with MySQL db.FollowComment
Cannot change column used in a foreign key constraint
https://repost.aws/questions/QUxS9khe0VTxaHMd6ViWIKrg/cannot-change-column-used-in-a-foreign-key-constraint
false
"I'm trying to deploy my own GreengrassV2 components. It's a SageMaker ML model (optimized with SageMakerNeo and packaged as a Greengrass component) and the according inference app. I was trying to deploy it to my core device with SageMaker Edge Manager component. But it is always stuck in the status "In progress".My logs show this error:com.aws.greengrass.tes.CredentialRequestHandler: Error in retrieving AwsCredentials from TES. {iotCredentialsPath=/role-aliases/edgedevicerolealias/credentials, credentialData=TES responded with status code: 403. Caching response. {"message":"Access Denied"}}But how do I know which policies are missing?FollowComment"
Greengrass own component deployment stuck "in progress"
https://repost.aws/questions/QUlDu9VAj4Qx-O7cbAXDz28w/greengrass-own-component-deployment-stuck-in-progress
true
"2Accepted AnswerHello, please refer to https://docs.aws.amazon.com/greengrass/v2/developerguide/troubleshooting.html#token-exchange-service-credentials-http-403 for troubleshooting, you'll need iot:AssumeRoleWithCertificate permissions on your core device's AWS IoT role aliasCommentShareJoseph Cosentinoanswered a month ago"
"For Athena's SDK, is there a way to start the query execution using the ID of my saved query (NOT the execution ID). I'm currently using Java and Python (boto3), and I'm wondering if there is a functionality that takes the saved query's ID as input and starts the query execution?For example. I want something in Python as follows:...athena_client = boto3.client('athena')response_query_execution_id = athena_client.start_query_execution( QueryID='the ID of my saved query')...FollowComment"
Can I use the ID of my saved query to start query execution in Athena SDK?
https://repost.aws/questions/QUa4CRtPZhR4-5nv7PgReHjQ/can-i-use-the-id-of-my-saved-query-to-start-query-execution-in-athena-sdk
false
"0Hello,I am afraid to inform you that start query execution accepts query-string only, it does not accept execution id. Basically start query execution return a query execution id which states query successfully submitted to Athena. Having said that there is not any direct way to start query execution using execution id.You can consider one approach like use get-query-execution with query id which returns query string and pass this query string in start-query-execution.aws athena get-query-execution --query-execution-id aws athena start-query-execution --query-stringCommentShareShubham_Panswered a year ago"
Upgrade of ElastiCache fails with the following error:*The selected service updates could not be applied on the following redis clusters:Error Type: InvalidParameterValueException : Error Message: The service update cannot be applied to global replication groupsTrying to apply the upgrade from the Global DataStore page gives the following errorModifyGlobalReplicationGroup API does not support cache parameter group modification without a change in major engine versionFollowComment
Problem upgrading AWS ElastiCache Redis cluster
https://repost.aws/questions/QULswdbhYFSNeCn9QiE4MnjA/problem-upgrading-aws-elasticache-redis-cluster
false
"0Hello,

I understand that you are experiencing the errors below when attempting to upgrading AWS ElastiCache Redis cluster:
“The selected service updates could not be applied on the following redis clusters: Error Type: InvalidParameterValueException : Error Message: The service update cannot be applied to global replication groups.”
“ModifyGlobalReplicationGroup API does not support cache parameter group modification without a change in major engine version”

Before trying to apply the upgrade from the Global Datastore, please ensure that you use Redis engine version 5.0.6 or higher and R5, R6g, R6gd, M5 or M6g node types for your AWS ElastiCache Redis cluster.

Please see the Prerequisites and limitations for Global Datastore: [1] https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Redis-Global-Datastores-Getting-Started.html
I hope the above information is helpful.CommentSharePhindianswered a year ago"
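A minimal boto3 sketch for checking those prerequisites across your replication groups (none of the values below come from the thread):

```python
# Minimal sketch: list replication groups with their node types and engine
# versions to verify the Global Datastore prerequisites mentioned above
# (Redis 5.0.6+ and R5/R6g/R6gd/M5/M6g nodes).
import boto3

elasticache = boto3.client("elasticache")

for group in elasticache.describe_replication_groups()["ReplicationGroups"]:
    node_type = group.get("CacheNodeType")
    for cluster_id in group.get("MemberClusters", []):
        cluster = elasticache.describe_cache_clusters(
            CacheClusterId=cluster_id
        )["CacheClusters"][0]
        print(group["ReplicationGroupId"], cluster_id,
              node_type or cluster["CacheNodeType"], cluster["EngineVersion"])
```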
"Hi,are there any plans for EMR Serverless service to utilize Spot capacity? Similar e.g. to Fargate Spot for ECS.Thank you,NikosFollowComment"
EMR Serverless on Spot capacity?
https://repost.aws/questions/QUOo_XfYuwR7WeL49ZWUNLSA/emr-serverless-on-spot-capacity
true
"0Accepted AnswerThank you raising this question on re:Post.AWS Serverless is a managed service and hence the resources are managed by the EMR Service. From the documentation I can only see choices limited to the architecture (x86 or arm64) and there are no choices on instance-types in use. This is a design choice to abstract out the complexity to manage a cluster from AWS users, so that the users can focus more on their business solutions and not on managing the clusters.I understand the question about spot instance may be related to pricing and if you can expect cheaper option in the future. Unfortunately, the information on upcoming changes and features are internal to AWS EMR Service Team. I would recommend you to monitor the release page for EMR ServerlessCommentShareSUPPORT ENGINEERKrishnadas Manswered a month ago"
"I want to extract only specific data from an invoice using amazon textract currently all the data including form fields and table fields are getting extracted and their contains some data that the user does not require, the user only cares about specific fields such as invoice date, invoice number etc. So i was wondering if their was a way to resolve this issue.FollowComment"
Extract only specific data from invoice using amazon textract
https://repost.aws/questions/QUy2J2vMYGSRSuMeuc4o6ubw/extract-only-specific-data-from-invoice-using-amazon-textract
false
"0Amazon Textract automatically detects the vendor name, invoice number, ship to address, and more from the sample invoice and displays them on the Summary Fields tab. It also represents the standard taxonomy of fields in brackets next to the actual value on the document. For example, it identifies “INVOICE #” as the standard field INVOICE_RECEIPT_ID.Please refer to this blog which provides some good technical insight : https://aws.amazon.com/blogs/machine-learning/announcing-expanded-support-for-extracting-data-from-invoices-and-receipts-using-amazon-textract/CommentShareEXPERTAWS-User-Nitinanswered 6 months ago"
"The stated DDL timeout for Athena is 30 minutes, according to quotas. We regularly see DROP TABLE failing with "DDL execution timed out" after 60 seconds. This causes a lot of problems for me.To reproduce - my branch of dbt-athena here has a test that fails due to this timeout issue to drop an iceberg table every time. Athena Engine v3.Is there any way to fix it? Is this a known issue? Will a fix be forthcoming?FollowComment"
Athena Iceberg DROP TABLE DDL timeout after 60s?
https://repost.aws/questions/QUqVwL0QU7T1W2xwmXB8XTWg/athena-iceberg-drop-table-ddl-timeout-after-60s
false
I set everything up following this guide.When I try to login using AWS access portal URL the following happensI get redirected to GoogleI select my account (that I have manually added as a user)I get an error:Something went wrongLooks like this code isn't right. Please try again.Any pointers are much appreciatedFollowComment
IAM Identity Center – Google Workspace doesn't work as identity source
https://repost.aws/questions/QUBMcTRkUwThGUQvTeg5SXvg/iam-identity-center-google-workspace-doesnt-work-as-identity-source
false
"2Hi! I was struggling with the same problem. For me, I had to ensure that the username of the user was the same as the email of the user in the google admin panel. My user was named 'jane' instead of 'jane@example.com', which caused SSO to fail, despite that user's email being jane@example.comCommentSharerePost-User-4015773answered 6 months agorePost-User-4015773 6 months agosee here https://github.com/awsdocs/aws-single-sign-on-user-guide/issues/2Sharemarpin-apc 3 months agoRan into the same issue. Setting the user name to the email address resolved the problem.Shareeightnoteight 21 days agoran into the same issue, thank you repost user 4015773Share0Did you check the CloudTrail logs to check if you see any errors related to "ExternalIdPDirectoryLogin"CommentShareap16answered 6 months agoAmelia Crowther 6 months agoi have the same problem as this person and havent been able to find any errors containing "ExternalIdPDirectoryLogin" in cloudtrail logsShare"
"Dear forum,using AWS Pinpoint we want to send and receive SMS to and from our customers. At the moment we are already using another 3rd party service to do so and plan to migrate to AWS (where all the rest is already hosted).We are having international customers (so numbers have various country codes) and started buying two long codes: one for Germany (+49) and one for Netherlands (+31).While testing Pinpoint, with a few German numbers (+49), we noticed the following:Outgoing, we can send SMS from Pinpoint to our German numbers using the German long code as "originationNumber"Incoming, we can send SMS from our German numbers to Pinpoint when sending to our German long code (SMS is handed over to SNS reliable)However:Outgoing, when we use the Netherland long code as "originationNumber" to send SMS to German numbers, we always receive SMS from the German long code (so not the explicitly specified Netherland long code). It does not matter if we do it via API or test message UI in Pinpoint.Incoming, when we send from a German number towards the Netherland long code, the incoming SMS is not received at all.We know from our currently used 3rd party service for SMS handling, that international roaming can be tricky. While sending works reliable there internationally (so from +49 to a +31 number as example), they also have issues with receiving sometimes.Questions:why can't we send from the +31 number to a +49 number, although we explicitly specified the "originationNumber"?is it expected that pinpoint can't receive an SMS from a number with a different country code?GreetingsMarkusFollowComment"
Pinpoint SMS - International Usage
https://repost.aws/questions/QUkoDr0bvWTDezU3aeBwSuSQ/pinpoint-sms-international-usage
false
"Without using application load balancer, I would like to bind Cloudfront to EC2 instance. I created a distribution with domain name ec2-*******.**.amazonaws.com with HOST as header but I got502 ERROR CloudFront wasn't able to connect to the origin. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.When I click ec2-*******.**.amazonaws.com , no page opens.FollowComment"
CDN for EC2 instance
https://repost.aws/questions/QUwVuNQMz_QAyGmn8MAal8XQ/cdn-for-ec2-instance
false
"0Hi, I'm not sure what you mean "When I click ec2-..amazonaws.com , no page opens". Is ec2-..amazonaws.com running as a publicly-accessible webserver? If so, it should be OK as an origin for CloudFront.CommentShareEXPERTskinsmananswered a year agoTechxonia a year agoI am not sure about this. When I go public IPv4 address, my website opensShare"
"I am using the Aurora Blue/Green deployment process to upgrade by database from mySQL5.7 to mySQL8.0.26. This also is upgrading the Aurora engine from 2 to 3.The upgrade fails due to a pre-check failure:{ "id": "engineMixupCheck", "title": "Tables recognized by InnoDB that belong to a different engine", "status": "OK", "description": "Error: Following tables are recognized by InnoDB engine while the SQL layer believes they belong to a different engine. Such situation may happen when one removes InnoDB table files manually from the disk and creates e.g. a MyISAM table with the same name.\n\nA possible way to solve this situation is to e.g. in case of MyISAM table:\n\n1. Rename the MyISAM table to a temporary name (RENAME TABLE).\n2. Create some dummy InnoDB table (its definition does not need to match), then copy (copy, not move) and rename the dummy .frm and .ibd files to the orphan name using OS file commands.\n3. The orphan table can be then dropped (DROP TABLE), as well as the dummy table.\n4. Finally the MyISAM table can be renamed back to its original name.", "detectedProblems": [ { "level": "Error", "dbObject": "mysql.general_log_backup", "description": "recognized by the InnoDB engine but belongs to CSV" } ] }As an Aurora user, it is not possible for me to delete, move, move, alter or change any tables in the mysql tablespace, so the recommend remediation is not possible.So my question is, how can I force the Blue/Green process to skip this check, or even better, how can I manually DROP the mysql.general_log_backup table as I do not need it?Please note I am using "FILE" based logging the DB parameters.Steps to reproduce:Create an aurora instance with Engine version 5.7.mysql_aurora.2.10.3start a blue green deployment withengine version 8.0 and aurora3+use custom cluster parameter groupuse custom instance parameter groupBlue Green environment createdDB Engine Upgrade failsThanks!FollowComment"
How can I bypass prechecks for RDS engine upgrades?
https://repost.aws/questions/QUOCwE_YkxQoSKBI6Ev8VBZw/how-can-i-bypass-prechecks-for-rds-engine-upgrades
false
"0Hi,Thanks for sharing the detailed information.I have checked internally regarding to the issue, this will need to engage the internal team to apply mitigations from AWS end. Sincerely apologize for the inconvenience. But if you have support plan, I would encourage you to raise a support case for us to take a look and mitigate the problem.Before applying mitigation from AWS end, please also make sure that this is the only pre-check errors found.CommentShareSUPPORT ENGINEERKevin_Zanswered 2 months ago"
"Appreciate any input on this - previous image creation on this Workspace has occurred w/o difficulties, but after restoring a Win19 Workspace that was in an 'unhealthy' state, image creation now fails. When running ImageChecker.exe, an error of 'No AppX packages can be in a staged state' is thrown. However, listing these through PowerShell 'Get-AppxPackage -AllUsers' shows none in a staged state. Running all of the powershell commands for this issue in the 'Tips' section of the Workspaces Admin Guide doesn't fix the problem, either, even after reboot (and all of the AppX packages left are System AppX packages.Wondering if anyone else has encountered this and/or any solutions? I googled a bit, including SysPrep resources, without any luck. Wondering what set of PowerShell cmdlets might allow me to discover exactly which AppX package is causing the problem and I can likely force it to be unloaded with that info.Thanks!MEFollowComment"
"ImageChecker fails d/t staged state of AppX pkgs, but no staged AppX found"
https://repost.aws/questions/QUnXFFmM_VRBGKujyeh4Eu_Q/imagechecker-fails-d-t-staged-state-of-appx-pkgs-but-no-staged-appx-found
false
"2So, ended up figuring this out with some effort:Some Windows features were uninstalled in an effort to streamline the resulting images/bloat, but this caused those AppX packages to be marked as staged instead of installed. They cannot be removed (or not easily, and should not). They did not show up as staged in the cmdlets that are in the AWS tips section for this. In order to reveal these, the following code needs to be executed:Get-AppxPackage -AllUsers | Format-List -Property PackageFullName,PackageUserInformation(see this Microsoft article for a full explanation of the problem and how to check: https://docs.microsoft.com/en-US/troubleshoot/windows-client/deployment/sysprep-fails-remove-or-update-store-apps)Once the packages in question are identified, I did actually try to uninstall them, but due to ownership by the system 'nt authority\system' it is not really possible to do so (or appears so without things getting rather messy). The solution was to simply change the status from Staged to Installed by forcing their install to the user. For instance, to just cycle through the entire AppX list and install any listed packages that are not already in the Installed state:Get-AppxPackage -AllUsers | Foreach {Add-AppxPackage -DisableDevelopmentMode -Register "$($_.InstallLocation)\AppXManifest.xml"}Kudos to this article on how to do so: https://www.howtogeek.com/224798/how-to-uninstall-windows-10s-built-in-apps-and-how-to-reinstall-them/CommentSharemechEanswered 2 years ago"
"I am new to this and in wordpress it asks me to update the PHP 7.0 a 7.4 , try it but it does not work out :( I appreciate your helpFollowComment"
How do I update the PHP on AWS Lightsail?
https://repost.aws/questions/QU1V112m6ZR5CW9b43wa2y0A/how-do-i-update-the-php-on-aws-lightsail
false
"0Lightsail is a regular VM. That means that you can login to it and update the software.The method depends on what OS you chose for your VM when creating your instance.Note that there are images for lightsail that include Wordpress pre-configured which of course is a lot simpler.CommentShareYair Carelanswered 4 months ago0You have two general options:Create a newer WordPress instance and transfer the content (the latest WP instances run PHP 8) https://lightsail.aws.amazon.com/ls/docs/en_us/articles/migrate-your-wordpress-blog-to-amazon-lightsailCreate an instance snapshot and use apt-get to upgrade to a newer version of PHP on the instance.Migrating your content to a new instance is likely the safest route because you can export the content and test on the new instance before pointing the Static IP to the new instance (or updating the DNS record, but we recommend you configure a Static IP so you have more options in the future).It may require fiddling with some settings or using a 3rd party plugin if you have a lot of embedded images/media. The other advantage to this approach is that all the system software will be more up-to-date and tested together.CommentShareGabriel - AWSanswered 4 months ago"
"Hello,This is a qualitative question. When I log on to RDS on my aws management console the console does not show all of my databases.The URL automatically defaults to the us-west-2 region, and on the console it says DB Instances (0/40). I was confused though, because I had created a db only a day ago, and I was able to still connect to it through CLI, though somehow db instances was still 0/40.I noticed I had to go into the URL and change the region from us-west-2 to us-east-1, and then magically DB Instances (1/40) with my database was right there.Is it possible to see databases across all regions in the RDS console without having to change the URL?FollowComment"
RDS console does not show all databases
https://repost.aws/questions/QUx03vTR1VQ1KOriLu8bwHsw/rds-console-does-not-show-all-databases
true
"0Accepted AnswerThe RDS console is separated by region, so it is not possible to see the databases for all regions.If you want to see the databases for all regions, you can create a shell script using AWS CLI or other methods.However, it is a shell script and cannot be displayed in the RDS console.CommentShareEXPERTRiku_Kobayashianswered 3 days ago0Yes, you can't see all RDS instances from multiple regions via the console yet.Ibrahim Mohamed (albarki) did create a nifty script that will provide the list for you.You can find it at his GitHub repository - https://gist.github.com/albarki/3588354ef11a137e199e29381fb07de1Hope this helps.CommentShareCarlos del Castilloanswered 2 days ago0Can you see dB's via DBeaver?CommentSharemyronix88answered a day ago"
"We have a RDS proxy in front of an MySQL aurora instance.Many of the metrics I would expect to see are not showing: DatabaseConnectionsCurrentlySessionPinned, DatabaseConnectionsCurrentlyInTransaction, DatabaseConnectionsCurrentlyBorrowed. These all return missing not 0 for any timeframe.I can connect to the proxy (using a connection on the proxy) and type BEGIN to open a transaction, and I would expect the DatabaseConnectionsCurrentlyInTransaction metric to be at least 1 until I roll back the transaction or commit it. However the data is still 'missing'.Any ideas as to how to resolve this?FollowComment"
Some metrics missing for RDS proxy MySql Aurora
https://repost.aws/questions/QUWj6NLZUOQ8qFBs_2_9JKxA/some-metrics-missing-for-rds-proxy-mysql-aurora
true
"0Accepted AnswerThank you for reaching out. I understand you are unable to see any data for the below metrics :DatabaseConnectionsCurrentlySessionPinned, DatabaseConnectionsCurrentlyInTransaction, DatabaseConnectionsCurrentlyBorrowed.The above 3 metrics is a direct result of Dimension set 1, Dimension set 3, Dimension set 4 with the statistic as SUM.Dimension set 1: ProxyNameDimension set 3: ProxyName, TargetGroup, TargetDimension set 4: ProxyName, TargetGroup, TargetRoleAdditionally, Pinning is a situation when RDS Proxy cannot be sure that it is safe to reuse a database connection outside the current session. This usually occurs when the client mutates the session state (variables and configuration parameters that you can change through SET or SELECT statements). In such a case, it keeps the session on the same connection until the session ends.Typically pinning occurs in the following conditions :session parameters are changedtemp tables are createdlocking functions are calledprepared statements are useduser-defined variables are declaredI understand you are running query and if the block of code satisfies the above then that session will get pinned.The above information will provide you with details to triage the issue. Please note this is general guidance only. In order to understand the issue fully, we require details that are non-public information. Please open a support case with AWS using the following link.As always, Happy Cloud Computing.CommentShareSUPPORT ENGINEERArnab_Sanswered a year agoelectric_al a year agoThanks!Yes I can find the metrics if I go and find them in cloudwatch metrics, and I can see the values are > 0.However my initial question was about the 'metrics' tab in the actual RDS proxy AWS section, that still shows 'no data'. From the above that a) the data is there in cloudwatch metrics and b) its not showing on the RDS proxy page, I suggest this is a bug?Share0@electric_al I have the same issue with the following metrics not showing in the RDS proxy details page metrics list:DatabaseConnectionsCurrentlyInTransactionDatabaseConnectionsCurrentlySessionPinnedDatabaseConnectionsCurrentlyBorrowedClientConnectionsSetupFailedAuthDatabaseConnectionsSetupFailedQueryDatabaseResponseLatencyI am able to add all of these metrics other than ClientConnectionsSetupFailedAuth to a cloudwatch graph directly using the metrics selection list.The source of the issue is that the RDS proxy details page - metrics graphs configuration for these 6 metrics are filtered on the target-group and target. I have a case open with AWS support to have the metrics reconfigured for the RDS proxy details page with the filters removed.CommentShareChris Porteranswered 7 months ago"
"Hello, I am in need of some troubleshooting help.I am running Greengrassv2 and are building a lambda component. The target runtime (greengrass nucleus) runs on a raspberry pi (arm32v7). Hence I am creating the lambda deployment package in a container based on arm32v7/python:3.9-slim-bullseye. My lambda deployment package contains e.g. "_awscrt.cpython-39-arm-linux-gnueabihf.so" which makes me think I have made a correct compilation of the wheels.Now to the issue I am facingMy lambda function component works as expected until I do an import such as from awsiot.greengrasscoreipc.clientv2 import GreengrassCoreIPCClientV2. When I do that the function just hangs, without errors, I have a debug printout before the import that is displayed correctly in the component log.I am running the latest version of all components, e.g. 2.7.0 of greengrass nucleus.The component configuration states "containerMode": "GreengrassContainer".I do not yet have that much debug information, since the lambda-component log is quiet and the and greengrass.log is happy pappy, its latest row just states that is has published to Local PubSub topic (which triggered my Lambda component).If I execute the lambda function in AWS I instantly receive an error: "No module named 'pycares._cares'". <---- which can be expected since the wheels are not compiled for amazon linux2 arch.I am suspecting something about python version, I have yet to check which python version is used by greengrass at runtime. However the raspbery pi is using pyenv to set python 3.8.3 globally. And as mentioned I have compiled the lambda deployment package using python3.9. This just struck me so I will investigate this asap.Any troubleshooting tips are very welcome.FollowComment"
AWS Greengrass lambda component hangs on python import of GreengrassCoreIPCClientV2
https://repost.aws/questions/QUh_jZ85XPQ1SQ2X4OipxM2g/aws-greengrass-lambda-component-hangs-on-python-import-of-greengrasscoreipcclientv2
true
"2Accepted AnswerHi @rikerikI would highly recommend creating a Greengrass v2 custom component instead of a Greengrass Lambda to run your code on Greengrass. In Greengrass V2, Lambdas are supported for backward compatibility to Greengrass v1.To get you started, take a look at this tutorial, Step 5 of the tutorial has step-by-step instructions to create a simple python based Hello World component.Greengrass v2 also provides a Greengrass Development Kit CLI that simplifies the creation and deployment of components to AWS Greengrass.Compared to Lambdas, Greengrass components run as an OS process and not in a container. This makes dependency management easier. Additionally, you can add setup commands to install dependencies in the Greengrass component recipe.Greengrass Components can also subscribe to local PubSub topics, a python code sample can be found hereHope that helpsCommentShareEXPERTJan_Banswered 8 months agorikerik 8 months agoThank you, migrating to a custom component would definably have solved my issue since it had to do with lack of working memory in the container.Since I am using serverless framework to build and deploy the lambda function I refrained from immediately switching over to building a custom component. Is it your opinion that lambdas will be deprecated in the future of GGv2?And a question, you wrote: "Additionally, you can add setup commands to install dependencies in the Greengrass component recipe." in your answer.Are you by that recommending that I should do "pip install" like that and by that have the wheels compiled correctly since it would execute on the target OS?ShareJan_B EXPERT8 months agoI would still recommend creating a virtual environment per component to isolate the dependencies, here a sample recipe :...Manifests: - Platform: os: /linux|darwin/ Lifecycle: Install: |- python3 -m venv venv . venv/bin/activate pip3 install --upgrade pip pip3 install wheel pip3 install awsiotsdkRun: Script: |- . venv/bin/activate python3 -u {artifacts:path}/mycomponent.pyShare1Hi,The reason why this is happening is that importing those additional modules is putting your application above the memory limit and then killing it. The memory limit is configured when you create the component from a lambda, the default is 16MB. You can address this by using "NoContainer" mode instead of a container. You may also increase the memory limit or simply use a native component as suggested by @Jan_B.If you do change the memory limit or container mode, make sure that you use "RESET": [""] in the deployment configuration update for the component such that the new default values will be used instead of the existing values on the device. https://docs.aws.amazon.com/greengrass/v2/developerguide/update-component-configurations.html#reset-configuration-update.Cheers,MichaelCommentShareEXPERTMichaelDombrowski-AWSanswered 8 months agorikerik 8 months agoThank you!You were spot on, the container did indeed run out of memory. I upped the lambda component to 32MB and I am now able to import the GreengrassCoreIPCClientV2 class.I guess this means that the container is running armv7l just as the host OS, otherwise the import should have failed, right?Share"
"Hello, I was following this guide for installing SSL certificate https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-using-lets-encrypt-certificates-with-wordpressEverything seemed to be going well until step 7, I tried to restart the bitnami services using the command instructed and I got the following message"Job for bitnami.service failed because the control process exited with error code.See "systemctl status bitnami.service" and "journalctl -xe" for details."I'm not technical and I have no idea what this means. My site cannot be reached now, I tried to reboot my lightsail instance but that didn't do anything. At this point, I just want my site back and I'll just pay for a plugin that does the install for me!Any help in getting my site back would be much appreacated.FollowComment"
SSL certificate install gone wrong
https://repost.aws/questions/QUyF9YyvpTSkWbKzWdFr0TaA/ssl-certificate-install-gone-wrong
false
"0There is a bug in this article. In Step 7.5 for both Approach A and B, the text says DOMAINsudo ln -sf /opt/bitnami/letsencrypt/certificates/DOMAIN.key /opt/bitnami/apache2/conf/bitnami/certs/server.keyIt should be $DOMAINsudo ln -sf /opt/bitnami/letsencrypt/certificates/$DOMAIN.key /opt/bitnami/apache2/conf/bitnami/certs/server.keySo wherever you see DOMAIN.key and DOMAIN.cert , change to $DOMAIN.key and $DOMAIN.cert . Do it for all the commands under Approach A and Approach B (but following only the section relevant for you).Once done, runsudo /opt/bitnami/ctlscript.sh startCommentShareAlexaanswered a year agoDan a year agoThanks! So what step should I restart the tutorial from? Step 2?Share0Hi,Also struggling with this. I have followed the advice above and restarted the process from step 3 but still, the following shows:bitnami@ip-172-26-12-111:~$ sudo mv /opt/bitnami/apache2/conf/bitnami/certs/server.crt /opt/bitnami/apache2/conf/bitnami/certs/server.crt.oldbitnami@ip-172-26-12-111:~$ sudo mv /opt/bitnami/apache2/conf/bitnami/certs/server.key /opt/bitnami/apache2/conf/bitnami/certs/server.key.oldbitnami@ip-172-26-12-111:~$ sudo ln -sf /opt/bitnami/letsencrypt/certificates/$DOMAIN.key /opt/bitnami/apache2/conf/bitnami/certs/server.keybitnami@ip-172-26-12-111:~$ sudo ln -sf /opt/bitnami/letsencrypt/certificates/$DOMAIN.crt /opt/bitnami/apache2/conf/bitnami/certs/server.crtbitnami@ip-172-26-12-111:~$ sudo /opt/bitnami/ctlscript.sh startStarting services..Job for bitnami.service failed because the control process exited with error code.See "systemctl status bitnami.service" and "journalctl -xe" for details.bitnami@ip-172-26-12-111:~$ ``Any advice would be appreciated.Thanks,JackCommentShareJack155Q4answered a year ago0Is there another error where it refers to /certificates/ rather than /certs/?I spotted a note, however, skipping this, which I had not done before did not help.**Note**Step 5 applies only to instances that use the Ubuntu Linux distribution. Skip this step if your instance uses the Debian Linux distribution.CommentShareJack155Q4answered a year ago"
"Hi,I am looking for information explaining how to access DDR memory from host, but NOT from logic contained in the CL.Here is the background:Using 2 FPGA instance, I was able to do send data from 1st FPGA pciemaster to the 2nd FGPA ddr.Now I want to read the 2nd FPGA DDR data and do data compare, not hardware compare as implemented in cl_tst module.Please advice on to achieve this.Regards,VenkatFollowComment"
How to access DDR memory from the host
https://repost.aws/questions/QUhg7nMKbsTQyQXIVrmWo3mQ/how-to-access-ddr-memory-from-the-host
false
"0Hello,I am assuming your design uses PCIS interface from Shell to provide datapath to the Host Memory. Our example CL_DRAM_DMA has this datapath where PCIS interface feeds an interconnect which then connects to all four DDR. Host can access the PCIS interface (and hence the DDR) through PF0 BAR4 address exposed to the Host as shown below:https://github.com/aws/aws-fpga/blob/master/hdk/docs/AWS_Fpga_Pcie_Memory_Map.md#memory-map-per-slotAWS provides peek/poke APIs to access the PF/BARs from the host. You should be able to access PF0-BAR4 using APIs below:https://github.com/aws/aws-fpga/blob/master/sdk/userspace/fpga_libs/fpga_pci/fpga_pci.c#L337https://github.com/aws/aws-fpga/blob/master/sdk/userspace/fpga_libs/fpga_pci/fpga_pci.c#L385Please let us know if you run into any issues or have any questions.Thanks!ChakraCommentShareawschakraanswered 2 years ago0Hi Chakra,I was able to access either FPGA's BAR4.Thanks for pointing to the API's and doc.Regards,VenkatCommentSharevenkubanswered 2 years ago"
Hello.My primary Postgres (15.2) rds is in Mumbai region. I am trying to copy the snapshot in Hyderbad region but it does not show up in the "Destination Region" drop-down. Do I need to do anything to enable RDS region?I got the Hyderabad region enabled about a week ago and I can see it from the region drop-down in the top navbar.FollowComment
not able to copy rds snapshot to Hyderabad region
https://repost.aws/questions/QU9ZJ6mwNSRmCH7SCUy934nQ/not-able-to-copy-rds-snapshot-to-hyderabad-region
false
"0If no other errors were given, this could be a timing issue. From the docs:Depending on the AWS Regions involved and the amount of data to be copied, a cross-Region snapshot copy can take hours to complete. In some cases, there might be a large number of cross-Region snapshot copy requests from a given source Region. In such cases, Amazon RDS might put new cross-Region copy requests from that source Region into a queue until some in-progress copies complete. No progress information is displayed about copy requests while they are in the queue. Progress information is displayed when the copy starts.CommentShareEXPERTkentradanswered a month ago"
"I have a sagemaker notebook instance having two jupyter notebook ipynb files.When I had one jupyter notebook, I was able to run it automatically with one lambda function trigger and lifecycle configuration.Now I have two jupyter notebooks and corresponding two lambda function triggers. How can I run them based on the trigger by changing the lifecycle configuration script.The trigger is file uploading into S3. Based on what location the file is added, the corresponding jupyter notebook should runFollowComment"
Run different notebooks present in same Sagemaker notebook instance using lifecycle configurations based on different lambda triggers
https://repost.aws/questions/QUGrSiVuFAS_WuZDVTUmFdlA/run-different-notebooks-present-in-same-sagemaker-notebook-instance-using-lifecycle-configurations-based-on-different-lambda-triggers
false
"1For the use case of automatically running ML jobs with on-demand infrastructure, with the ability to accept input parameters, I'd recommend SageMaker Processing as a better fit than Notebook Instances + Lifecycle Configs.With processing jobs:You could still use notebook files if needed, using a tool like Papermill or a more basic pattern like just loading the file and running through the code cells. For example using a FrameworkProcessor, you should be able to upload a bundle of files to S3 (including your notebooks and a plain Python entrypoint to manage running them).You could trigger processing jobs from events just like your current notebook start-up, but could provide many different parameters to control what gets executed.The history of jobs and their parameters will be automatically tracked through the SageMaker Console - with logs and metrics also available for analysis.You wouldn't be limited to the 5 minute time-out of a LCConfig scriptIf you really needed to stick with the notebook pattern though, modifying the LCConfig each time seems less than ideal... So maybe I'd suggest bringing in another external state you could use to manage some state: For example have your LCConfig script read a parameter from SSM or DynamoDB to tell it which notebook to execute on the current run?CommentShareEXPERTAlex_Tanswered a year ago0How about putting two notebooks in two different notebook instances?CommentShareminniesunanswered 8 days ago"
"Hi Everyone,Our application requires MariaDB 10.2 (or greater) or MySql 8 (or greater) for the use of recursive CTEs.Is there any news on when this will be available to us via Aurora Serverless?Thanks.FollowComment"
When will Mysql 8 be available in Aurora Serverless?
https://repost.aws/questions/QUU1WDdt61RdawJ8Oq7Ot01A/when-will-mysql-8-be-available-in-aurora-serverless
false
"0Any new version is only considered for release after rigorous testing and compatibility checks to ensure we provide the best possible experience to our customers. It is for this reason providing ETAs for product releases is difficult most of the time.If its a major blocker for you, I would advice you to reach out to Support with your business justification and see if they can provide you with some answer.CommentShareD-Raoanswered a year agoD-Rao a year agoAnd also, keep an eye on below to stay updated with the announcements.https://aws.amazon.com/blogs/aws/https://aws.amazon.com/new/#database-servicesShare"
"The docs for the Cognito ForgotPassword API call say that Lambda triggers for pre-signup, custom message, and user migration will be invoked. I have a pre-signup trigger that works as expected on the SignUp action, but it is not being called when I submit a ForgotPassword action. [These docs] show trigger source values for CustomMessage_ForgotPassword and UserMigration_ForgotPassword, but there is no PreSignUp_ForgotPassword or similar. Is this a bug? Thank you!FollowCommentjamessouth a year agoI described a work-around here: https://github.com/aws-amplify/amplify-js/issues/8376#issuecomment-1036896944Share"
Cognito Pre-Signup Lambda trigger not being called on ForgotPassword
https://repost.aws/questions/QU-jR8K1zTQh6NOv8mXUXDlw/cognito-pre-signup-lambda-trigger-not-being-called-on-forgotpassword
false
"I have a WordPress installation on a Lightsail Bitnami instance. When I edit or create from WP Admin, I lose connection with my site altogether and the only way to re-establish connection is to reboot the instance on the AWS Lightsail console. When not editing, the same happens in approximately 24 hours. At times, I may need to reboot multiple times or even Stop / Start the instance to get it back. Any help will be appreciated.FollowComment"
Lightsail instance needs to be constantly rebooted
https://repost.aws/questions/QUIgBY4Gw1QKC-JDWtmmbwCw/lightsail-instance-needs-to-be-constantly-rebooted
false
"0What size instance are you running?The smaller instance sizes are not sufficient to run a web and database server.CommentShareDavid Ganswered a year ago0Hello David G. My apologies for not responding a month ago, I was not informed that you responded?I am using: 512 MB RAM, 1 vCPU, 20 GB SSDCommentShareAWS-User-1108793answered a year ago0Also, how do I upgrade to a suitable instance? Your help is appreciated.CommentShareAWS-User-1108793answered a year ago"
I am not able to get the custom metrics in explorer widget. However I can see the custom metrics in other widgets but not in explorer widget. any suggestions on thisFollowComment
custom metrics are not listing on metric explorer in cloud-watch
https://repost.aws/questions/QU7yiKK3QZQVCWxM_xQSkAdg/custom-metrics-are-not-listing-on-metric-explorer-in-cloud-watch
false
"Hello Everyone, I have a lambda function which will be triggered based on Dynamodb events. My lambda will push the document received from the dynamo to Opensearch using its API. The setup worked fine but it started throwing 403 forbidden errors from last 3 days. Please help if anyone faced similar issues. Thanks!FollowComment"
403 Forbidden issue from Lambda accessing Opensearch API
https://repost.aws/questions/QU-MCm9fN4SIGbHePtIbZDFw/403-forbidden-issue-from-lambda-accessing-opensearch-api
false
"1Hello,403 Forbidden occurs due to :-index_create_block_exceptionIssue with Access policyAs you already mentioned that the setup was working fine previously but started throwing 403 forbidden errors from last 3 days. Therefore, it could be due to "index_create_block_exception".Thus, in-order to fix this issue kindly refer to documentation here and check :-If your cluster is having "Lack of free storage space". Lack of free storage space can be easily checked from Opensearch metrics "FreeStorageSpace" for more information on this metric you can refer to cluster metrics doc.If the cluster is having high JVM memory pressure.If your cluster is not having any of the above-mentioned issues. Kindly check the exact error and see if it is similar to blocked by: [FORBIDDEN/10/cluster create-index blocked (api)] or if it is due to Access policy issue.Thank you!CommentShareSUPPORT ENGINEERHitesh_Sanswered 8 months ago"
"I am doing this tutorial Using Code Snippets to Create a State to Send an Amazon SNS message and successfully created a step function. When I go to run the step function it shows it succeeded, but I did not get the message on my cell phone. I have followed these instructions exactly as shown here.Here is my code:{ "Comment": "A Hello World example of the Amazon States Language using Pass states", "StartAt": "Hello", "States": { "Hello": { "Type": "Pass", "Result": "Hello", "Next": "World" }, "World": { "Type": "Pass", "Result": "World", "Next": "Send message to SNS" }, "Send message to SNS": { "Type": "Task", "Resource": "arn:aws:states:::sns:publish", "Parameters": { "Message": { "Input": "Hello this is a text message from Christian's AWS Step function CodeSnippetsCreateStateSendAWSSNSMessageCoool." }, "PhoneNumber": "+11234567890" }, "End": true } }}Any ideas?FollowComment"
Step Function Not Sending SNS Message
https://repost.aws/questions/QUgv1JNRZNTju8dty_L3zAVQ/step-function-not-sending-sns-message
false
"0Please check the following AWS support page for debugging tips - https://aws.amazon.com/premiumsupport/knowledge-center/sns-troubleshoot-sms-message-failures/Also if you are looking for a good workshop to learn Step Functions, please take a look at - https://catalog.workshops.aws/stepfunctions/en-US/CommentShareEXPERTIndranil Banerjee AWSanswered a year ago"
"My application receives data via email using SES to SNS to HTTPS sub pipeline. In the POSTed SNS JSON I get the complete message (the contents transacted after the SMTP DATA verb), but I do not see an attribute in the JSON that identifies the envelope recipient(s).This is critical because some SMTP client implementations batch a message to multiple recipients at the same domain in a single transmittal (Gmail) and some perform an SMTP transaction for each recipient separately (Yahoo).Identifying the envelope recipient provided with RCPT TO: verb(s) determines the intended recipients for that transaction, irrespective of the destination headers (To:, CC:, and other non-RFC variants).Without the envelope recipient data in the POST, BCC: deliveries cannot be supported, a lot of fuzzy logic is required to identify duplicate deliveries, and a separate dictionary must be maintained on the callback server to match the "accepted domains" list associated with the SES receiver and prevent attempting to process messages for non-accepted recipient domains.Has anyone found a solution to this shortcoming?FollowComment"
Identifying enveloper recipients (RCPT TO:) with SES Receiving Mail
https://repost.aws/questions/QU196loRtKRFyP0XI-Q_BzeQ/identifying-enveloper-recipients-rcpt-to-with-ses-receiving-mail
false
"0Answer provided by another forum:The envelope recipients are sent in the POST JSON, receipt.recipients[] attribute.See https://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email-notifications-contents.html#receiving-email-notifications-contents-receipt-object.CommentSharemdibellaanswered 2 years ago"
"Customer has circular objects in their data. Does Ground Truth support drawing circles rather than boxes out of the box (no pun intended)? I know that it supports semantic segmentation, but that is overkill in this case.FollowComment"
Does Ground Truth Support Circles?
https://repost.aws/questions/QUqVY0A3PIQsuJpcce1tjJcA/does-ground-truth-support-circles
true
"0Accepted AnswerAs of July 2020, we currently have Crowd HTML Element support for bounding box and polygons.CommentShareAWS-User-9193990answered 3 years ago"
"HI Guys,New to AWS CDK, so please bear with me.I am trying to create multi-deployment CDK, wherein the Dev should be deployed to Account A and prod to Acocunt B.I have created 2 stacks with the respective account numbers and so on.mktd_dev_stack = Mktv2Stack(app, "Mktv2Stack-dev", env=cdk.Environment(account='#####', region='us-east-1'), stack_name = "myStack-Dev", # For more information, see https://docs.aws.amazon.com/cdk/latest/guide/environments.html )the Prod is similar with the Prod account and different name.When I run them I plan on doingcdk deploy Mktv2Stack-devand simiar for prod.I am using the cdk 2.xx on PythonWhat my question is, does this setup give me an ability to pass a parameter, say details which is a dict object of names and criteria for resources that will be set up ? Or is there a way for me to pass parameter/dict from app.py to my program_name.py so that I can look up values from the dict and set them to resources accordingly.RegardsTanmayFollowComment"
Creating Dev and Prod deployments using CDK
https://repost.aws/questions/QUIJtAO3g4R5GIveSPStyDtQ/creating-dev-and-prod-deployments-using-cdk
true
"1Accepted AnswerHi and welcome :),Yes, you can pass additional arguments to the class. See the following class definition as an example:class Mktv2Stack(cdk.Stack): def __init__( self, scope: cdk.Construct, id_: str, *, database_dynamodb_billing_mode: dynamodb.BillingMode, api_lambda_reserved_concurrency: int, **kwargs: Any, ):You can then call it from app.py as follows:Mktv2Stack( app, "Mktv2Stack-dev", env=cdk.Environment(account=ACCOUNT, region=REGION), api_lambda_reserved_concurrency=constants.DEV_API_LAMBDA_RESERVED_CONCURRENCY, database_dynamodb_billing_mode=constants.DEV_DATABASE_DYNAMODB_BILLING_MODE,)I'd recommend to look at the Recommended AWS CDK project structure for Python applications blog post and Best practices for developing and deploying cloud infrastructure with the AWS CDK for additional examples and details.CommentShareAlex_Panswered a year ago"
"I made a custom qualification task using python and when the worker starts "take test" they will get an external link where I put my custom qualification task. After completion, they got a code and they need to put the code in the input form that I provided for them in the qualification submission form. After that in the worker portal, it is showing "Qualification submitted successfully and pending approval from the requester" in the pending qualification section. Now, in the requester account, I should get the worker id under the "workers" tab. But there is no worker id is there. As I do not get any approval request, I made a worker id by myself to check what is the issue. That's what I found out. If anyone has any idea/solution please let me know.FollowComment"
Worker ID is not showing up in the requester portal for approval
https://repost.aws/questions/QUxE_3jhUTRbuxPu72syBroA/worker-id-is-not-showing-up-in-the-requester-portal-for-approval
false
"hi when executing the below code this error is thrown "Could not load type 'Amazon.Runtme.Endpoints.IEndpointProvider' from assembly 'AWSSDK.Core, Version=3.3.0.0, Culture=neutral, PublicKeyToken=885c28607f98e604'." Code -> "try { client = new AmazonSecurityTokenServiceClient(); var response = client.AssumeRoleAsync(new AssumeRoleRequest { RoleArn = "arn:aws:iam::712090922:role/TestRole", RoleSessionName = "newsessionanme2" }); Credentials credentials = response.Result.Credentials; string access = credentials.AccessKeyId; AWSCredentials awsCredentials = GetAWSCredentials(credentials.AccessKeyId, credentials.SecretAccessKey, credentials.SessionToken); AmazonS3Client clientS3 = new AmazonS3Client(awsCredentials, Amazon.RegionEndpoint.USEast1); var s3Response = clientS3.ListBuckets(); Console.ReadLine(); } catch (Exception ex) { Console.WriteLine(ex.Message); Console.ReadLine(); }"I am running this on .Net framework 4.8 with AWSSDK.Core , AWSSDK.SecurityToken ,AWSSDK.S3 nuget packages.The code throws exception during the client object creation of AmazonSecurityTokenServiceClient();FollowComment"
"Could not load type 'Amazon.Runtime.Endpoints.IEndpointProvider' from assembly 'AWSSDK.Core, Version=3.3.0.0, Culture=neutral, PublicKeyToken=885c28607f98e604'."
https://repost.aws/questions/QUIN1_wOUWT-ic1yU-G-vS3Q/could-not-load-type-amazon-runtime-endpoints-iendpointprovider-from-assembly-awssdk-core-version-3-3-0-0-culture-neutral-publickeytoken-885c28607f98e604
false
"I am trying to get a handle on how to you define an ALB, its Listeners, Target group and Security groups in a CF Template. So I wrote out this sudo code listing. Is this correct if the ALB is Internal, listening on port 443 for traffic and sending that traffic to port 80 on the instance webserver?ALBProperties: Type: internal Listener: 80 Listener: 443 Subnets SecurityGroups LBAttributesALBListener80Properties: Reference: ALB Port: 80 Redirect rule to port 443ALBListener443Properties: Reference: ALB Port: 443 SSL Policy Certificate Forward rule to ALBTarget80ALBTarget80Properties: Port: 80 VPCid TargetgroupAttributes Registered instance(s) Healthcheck Check port 80ALBSecurityGroupIngress rules: Allow port 80 from VPC CIDR Allow port 443 from VPC CIDREgress rules: Allow port 80 to InstanceSecurityGroup Allow port 443 to InstanceSecurityGroup Allow All traffic to 127.0.0.1/32InstanceSecurityGroupIngress rules: Allow port 80 from VPC CIDR Allow port 443 from VPC ALBSecurityGroupEgress rules: Allow all to 0.0.0.0/0Am I looking at this correctly?FollowComment"
Correct way to define ALB and the security groups needed in CF Template
https://repost.aws/questions/QU3Fr98ZgISWisSyFlX94GQA/correct-way-to-define-alb-and-the-security-groups-needed-in-cf-template
true
"0Accepted AnswerHi, that's roughly right.Your ALBSecurityGroup only needs egress on port 80 to the InstanceSecurityGroup.Your InstanceSecurityGroup only needs ingress on port 80 from the ALBSecurityGroup.Your InstanceSecurityGroup doesn't need any egress rules for this purpose, but may need some to support its functionality.CommentShareEXPERTskinsmananswered 2 months agoEXPERTTushar_Jreviewed 2 months ago"
"I've created a custom component that downloads a ecr private docker and run 1 program as"Lifecycle": {"Run": "docker run --cap-add=SYS_PTRACE --runtime=nvidia -e DISPLAY=$DISPLAY --privileged --volume /tmp/.X11-unix:/tmp/.X11-unix --net=host -e NVIDIA_VISIBLE_DEVICES=all -v $HOME/.Xauthority:/root/.Xauthority -v /run/udev/control:/run/udev/control -v /dev:/dev -v /sys/firmware/devicetree/base/serial-number:/sys/firmware/devicetree/base/serial-number -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,graphics xxxxxxxxx.dkr.ecr.ap-southeast-2.amazonaws.com/smartdvr:latest my-program" }, ...I've got the above running, however if i manually stop the docker, or the my-program crashes, i don't see it auto-restarting.what is the usual way to make sure the docker stays running and the application restarts if the program crashes for example ?is there an option in the custom component that i can set ?or does everyone just start their program as a linux service and let the service handle the restart ?FollowComment"
best practises for auto restarting docker application
https://repost.aws/questions/QUukKrU8PdQsWeH77riYSesw/best-practises-for-auto-restarting-docker-application
false
"0Hi clogwog,Thanks for using Greengrass V2. Greengrass automatically restarts components 3 times if the component Run lifecycle processes exits with an error the component goes to ERRORED state. If it doesn't recover in those 3 attempts the component will be put in BROKEN state and won't be auto-restarted, and you will need to deploy a fix for that issue. If you're application crashes and the docker run command exits in this manner your docker component will also be restarted. However, if the error is never reported to greengrass in this way and the docker container keeps running or exits with code 0, then Greengrass will not know about the issue and won't restart. You can check your components log file and greengrass.log file to check if the component follows this path. This is likely what you're looking for since you want to rerun the container when the containerized application crashes.If you manually stop the container then greengrass does not have knowledge of that, the run command will also finish with 0 exit code in that case which is treated by Greengrass as success.Another mechanism for restarting a component is via the greengrass cli, but that needs to be done by logging into the device. https://docs.aws.amazon.com/greengrass/v2/developerguide/gg-cli-component.html#component-restartPlease update this thread if that does't address your concern and provide component and container logsThanks,ShaguptaCommentShareshagupta-awsanswered 2 years ago"
"Currently using the AWS Amplify DataStore to build an offline enabled application using react native and expo, but my data is a bit complex because it includes primitive data types(like string, number, array), also includes some files(pdf, images, doc). I want the feature to save document in offline mode, the application data is working in offline mode as per design. When the user comes back online , the documents will be uploaded to S3, offline should save it to the application (upload docs feature)Read AWS documents, the AWS Mobile SDK for android mentioned that we can allow the offline feature including documents for Android (Storage - Using GraphQL API - Android - AWS Amplify Docs:https://docs.amplify.aws/sdk/storage/graphql-api/q/platform/android/#client-code). BUT CANNOT FIND THIS FEATURE FOR JAVASCRIPTIf anyone can help -PLEASE I AM STUCK! :-)Thank you so much.FollowComment"
Document Upload in offline mode? (S3 using AWS DataStore )
https://repost.aws/questions/QUA1JvGfnVR1CopmAi5X7rAQ/document-upload-in-offline-mode-s3-using-aws-datastore
false
Hi! I have a public AWS video file here: https://s3-external-1.amazonaws.com/media.twiliocdn.com/ACb2722d11b73d22b594c81d79aed6b8d2/23ff33d3428202c6a24e7a8c6e5f4140 It only opens on Safari and won't open on Chrome (which I desperately need). I've tried removing all my extensions and using incognito mode and even clearing my cache and cookies but nothing helps. Any ideas?FollowComment alatech EXPERT 2 months ago What does Chrome web console return as error?Share Dakota 2 months ago You can click on it and see for yourself, but there isn't really an error... it is just blank, but in Safari it works.Share
AWS S3 File Not Opening on Chrome but only on Safari
https://repost.aws/questions/QUg2b-8aj7T36NhpcyhK9Jkg/aws-s3-file-not-opening-on-chrome-but-only-on-safari
false
"-1Greetings,The issue you're experiencing could be due to the video format or the CORS (Cross-Origin Resource Sharing) settings on the S3 bucket that hosts the video file.The video file you provided is in QuickTime format (file extension .mov), which might not be supported natively by Google Chrome. Safari supports .mov files as it's developed by Apple, but Chrome typically supports MP4, WebM, and Ogg formats. To make the video compatible with Chrome, you can convert the video to a more widely supported format like MP4 using wide known tools.Please let me know if I answered your questionCommentShareZJonanswered 2 months agoDakota 2 months agoCan I actually convert the file if it is already on AWS though? I want my clients to just be able to click that link and the video to open up...ShareNuno_Q 2 months agoYes, you can use AWS Elemental Mediaconvert https://aws.amazon.com/mediaconvert/ to transcode your file to MP4 file format and H264 CodecShareDakota 2 months agoI don't know how to convert the file if it is already in a public AWS file that I don't own. Twilio is the owner of this file... any thoughts?ShareNuno_Q 2 months agoFollow this https://github.com/aws-samples/aws-media-services-simple-vod-workflow/blob/master/2-MediaConvertJobs/README.md. In the input you can use the HTTPS URL of the file and create a MP4 output that will be written to your S3 bucketShare-1Your Video Format is H263 and most likely is not supported by the Chrome player you are using.The file plays fine on VLC or Safari so there is nothing wrong with the file.You can change the default video player of Chrome to use VLC or other player that supports the codec of this file, just follow the instructions on this article https://www.getdroidtips.com/change-default-video-player-browser/CommentShareNuno_Qanswered 2 months agoDakota 2 months agoI tried that but those extensions aren't working for me.. hmm any other suggestions?Share"
"Hi,We have a requirement for data backup from 1TB up to 40TB from our servers during a site decommission and retain it for a period of 30-90 days and would need access to it occasionally.Already tried S3 but that's not scalable for our requirement considering it takes huge time for upload and choke site bandwidth, We are trying to understand how snowcone or snowball works and the cost involved.FollowComment"
Data backup solution
https://repost.aws/questions/QUIavnvqBnQhKOzDzN1djpzg/data-backup-solution
false
"0Hello,Have you considered an AWS Backup? If your data resides in AWS Cloud, You can use AWS Backup.https://aws.amazon.com/backup/faqs/?nc1=h_ls, https://aws.amazon.com/backup/faqs/?nc=sn&loc=6Or, If your data is in On-Prem, You can consider AWS Storage Gateway.https://aws.amazon.com/storagegateway/?nc1=h_ls, https://aws.amazon.com/storagegateway/faqs/?nc=sn&loc=6As you know, Snowball is for offline migration. 1. AWS will send a Snowball device to you. 2. Then, you have to upload your data to the device. 3. Ship the device back to AWS. 4. After that, the data of the device will be uploaded to S3.Also, You can refer of Snowball cost in the following link.https://aws.amazon.com/snowball/?nc1=h_ls, https://aws.amazon.com/snowball/pricing/To get a more specific response, I think that you should explain a bit more clearly about your environment and requirements.Thank you.CommentShareSeungYong Baekanswered 7 months agorePost-User-7159555 6 months agoTo be more precise on our requirement.We have NVR servers (windows) at our corp offices used for security camera recordings.During a site decommission there is a need to retain data on these NVR servers for a period of 30-90 days.Security team would need access to this data in an event of investigation.As we need a full backup of the recordings, Backup can be initiated only after network decommission. In this case a offline solution is needed.Share0Thank you for your update about the requirements.I think that this blog is helpful for your environment and requirements.https://aws.amazon.com/ko/blogs/storage/collecting-archiving-and-retrieving-surveillance-footage-with-aws/If you implement Amazon S3 File Gateway in your office locations,The NVR servers can use SMB protocol to access the S3 File Gateway volume.The NVR servers and security team will be able to access the recording data directly because S3 File Gateway provides a maximum 64TB of local cache.Recording data will be transferred to the designated S3 bucket automatically and you can access this bucket as well.Also, you will be able to backup or archive the recording data to another S3 class for long-term retention through AWS Backup or ILM rules in the cloud.Finally, you can refer to S3 File gateway quotas.https://docs.aws.amazon.com/filegateway/latest/files3/fgw-quotas.htmlCommentShareSeungYong Baekanswered 6 months ago0You can use Snowball Edge devices to copy all this data to and keep the devices for the full 30-90 day period. Here is the pricing page for that: https://aws.amazon.com/snowball/pricing/You get the first 10 days included and then you're charged a fee per extra day you want to keep it. With Snowball Edge, you can copy all your data to the device and then access it locally if you need to via NFS or the S3 interface. You can also send the device back to us and we can upload/import all that data into S3 on your behalf, which would be considerably faster than uploading that data through an internet connection. Once it's in S3, you can do all sorts of stuff with it and use that data in a myriad of ways using other AWS services that work on top of S3.CommentShareKrishnaanswered 4 months ago"
"I'm basically using an example from this page to download a file from s3. If it matters, I'm using it in a ruby_block in chef. it works, but it echo's the file contents to the console. I'm trying to avoid that.https://aws.amazon.com/blogs/developer/downloading-objects-from-amazon-s3-using-the-aws-sdk-for-ruby/reap = s3.get_object({ bucket:'bucket-name', key:'object-key' }, target: file)The above is downloading the file, but logging everything to console.I've tried numerous configs setting in the class list for Aws::S3::Client.new and get_object, but nothing seems to be suppressing body getting logged to console.Could anyone here advise?ThanksEdited by: Kevin Blackwell on Mar 16, 2020 8:50 AMFollowComment"
Using aws-sdk-s3 get_object to stream file to disk
https://repost.aws/questions/QU-jJf50uVQVuuZ42AybFCfQ/using-aws-sdk-s3-get-object-to-stream-file-to-disk
false
"0My bad, had to remove http_wire_trace, setting it to false didn’t seem to work for me, but removing worked.Live and learnCommentShareKevin Blackwellanswered 3 years ago0Answered own questionCommentShareKevin Blackwellanswered 3 years ago"
"I'm setting up an on-premise CodeDeploy Agent.The instructions here have an issue.I completed Step 2: Generate temporary credentials for an individual instance using AWS STSI'm trying Step3: Add a configuration file to the on-premises instanceThe instructions say to use the iam-session-arn noted in Step 2. However, there is no session arn to note in Step 2. There's a session name, there's a role arn. No session arn.What is the session arn that is being referred to?FollowComment"
"Premise CodeDeploy Agent, session arn"
https://repost.aws/questions/QUl390l9NUSfCTn3qxBfTUMA/premise-codedeploy-agent-session-arn
true
"0Accepted AnswerUse the role-arn.My problem was my region was set to upper-case US-EAST-1, and was causing the error "Credential should be scoped to a valid region" Once I set the region to be us-east-1. The agent started working correctly.CommentSharerePost-EricPanswered 4 months ago"
"Question: Can i use Id token, access token, refresh token in User pool to identity pool?i making code login to Developer authenticated identitiesbut official document, i read Using Token on Amazon User pool no have Token in Amazon Identity poolif login Developer authenticated identities and expire access token, then Automatically provide access Token without making the user restart it?FollowComment"
"Can i use Id token, access token, refresh token in User pool to identity pool?"
https://repost.aws/questions/QUQHhzLuejTGebBSBAb6-e3Q/can-i-use-id-token-access-token-refresh-token-in-user-pool-to-identity-pool
false
"0Your question is a bit unclear. But when you pass your User Pool token to the Identity Pool, the Identity Pool calls the STS AssumeRoleWithWebIdentity API call to return temporary access credentials to the user.[1] The Identity Pool doesn't do anything else with OpenID token provided by the User Pool.[1] https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CognitoIdentityCredentials.htmlCommentShareAWS-User-7083746answered 9 months ago"
"Are there plans to have iOS / Android SDK for GameSparks?Is it possible to access FlexMatch through GameSparks CloudCode?- if yes, how?- if no, then how to use FlexMatch in iOS/Android?FollowComment"
GameSparks and FlexMatch inquiries
https://repost.aws/questions/QUfwiIxhIMS6yG6BOFKqXX4w/gamesparks-and-flexmatch-inquiries
false
"0Amazon GameSparks is currently in preview, and we are looking into integrating many services to CloudCode in the future to enrich game developers' experience, including a FlexMatch extension. Though we cannot guarantee any timeline at the moment. Please keep an eye out for related news on the GameTech blog as we announce GameSparks/GameLift updates. https://aws.amazon.com/blogs/gametech/CommentShareJamesM_AWSanswered a year ago"
"I have a small resource group (5 instances), and there's a weekly script I would like to run on all 5 of them. They start automatically when their users need to interact with them, and stop automatically when idle for 30 minutes. (Similar to a Cloud9 IDE environment, but not managed by Cloud9.) The maintenance window is scheduled for a time when they would all be stopped. There is no reliable time when I know for sure that they would all be running.When the weekly maintenance window rolls around, I would like SSM to start the instances, run the shell script on them, and then stop them. I thought I had managed it, by creating a maintenance window with three tasks:Priority 10: AUTOMATION task using the AWS-StartEC2Instance documentPriority 20: RUN_COMMAND task using the AWS-RunShellScript documentPriority 30: AUTOMATION task using the AWS-StopEC2Instance documentAll three tasks specify the resource group as the target, and {{RESOURCE_ID}} as the InstanceId parameter.The problem is that each task only runs on instances that are already running when the maintenance window starts, which would seem to make the AWS-StartEC2Instance automation kind of pointless. When there are no instances running, each task has a status of No invocations to execute.I could certainly use an EventBridge rule with a Lambda to make sure the instances are up and running, and then just have the RunShellScript task in the maintenance window. But the use case I described above seems like it would be a no-brainer easy thing to accomplish, so I'm concerned I'm just missing something simple. Any suggestions?FollowComment"
Starting targeted instances during a maintenance window?
https://repost.aws/questions/QUSAlxEzG1QCO-NmC_6yY0Uw/starting-targeted-instances-during-a-maintenance-window
false
"0Hello mbklein,Thank you for reaching out with your query.Below are the common possible reasons for this issue:No defined maintenance window targetsNo resource id is present.When required policies are not correct.For tasks which require unique identifiers for input. if there are no targets, the task will report back that there are no invocations to execute, as there were no inputs.To troubleshoot the issue, I performed this scenario in my internal lab and was able to successfully perform below mentioned tasks.AUTOMATION task using the AWS-StartEC2Instance documentRUN_COMMAND task using the AWS-RunShellScript documentAUTOMATION task using the AWS-StopEC2Instance documentPlease refer to the steps I performed below:Scenario1: When instances are already in stopped stateCreate a tag for your instance i.e. (Name = Department Value = Dev)Create a Resource Group [1], for group type select "Tag based"For resource types select "AWS::EC2::Instance"Next select the tag key (Department) and value (Dev)After the Resource Group is created, you can create a Maintenance Window and register the target with the Maintenance WindowOn Maintenance Window go to “Actions” > “Register targets”Select "Choose a resource group"Select the Resource group you created earlierThen click on “Register Target”Then click on "Actions" dropdown ->"Register Automation taskNext under Automation document-> "AWS-StartEC2Instance"Under Targets choose "Selecting registered target groups" and then select the Windows target IDFor Input parameters > "InstanceIDs" input parameter, Add all instance IDs separated by comma which you have mentioned in resource group. for example : i-abc********, i-def********For Input parameters > "AutomationAssumeRole" input parameter, input the role that you have configured for AutomationBelow is the AutomationAssumeRole policy I used in my lab:AutomationAssumeRole:Trust Entity{"Version": "2012-10-17","Statement": [{"Sid": "","Effect": "Allow","Principal": {"Service": "ssm.amazonaws.com "},"Action": "sts:AssumeRole"}]}Inline Permission Policy:{"Version": "2012-10-17","Statement": [{"Sid": "VisualEditor0","Effect": "Allow","Action": ["ec2:StartInstances","ec2:StopInstances","ec2:DescribeInstanceStatus"],"Resource": "*"}]}For IAM service role :To create IAM service role, Please follow below stepsGo on IAM Dashboard and click on create RolesUnder "Select trusted entity" click on "Custom trust policy" and add below mentioned policy{"Version": "2012-10-17","Statement": [{"Sid": "","Effect": "Allow","Principal": {"Service": ["ec2.amazonaws.com ","ssm.amazonaws.com "]},"Action": "sts:AssumeRole"}]}Click on nextIn Permissions add below three policiesNote: a) and b) are AWS Managed policies. 
You can easily search for them but for c) you need to create inline policya) AmazonSSMMaintenanceWindowRoleb) AmazonSSMFullAccessc) IAMPassRolePolicy: For IAMPassRolePolicy add below mentioned inline policy{"Version": "2012-10-17","Statement": [{"Effect": "Allow","Action": "iam:PassRole","Resource": "*","Condition": {"StringEquals": {"iam:PassedToService": "ssm.amazonaws.com "}}}]}You can also follow the AWS video tutorial to know more about Maintenance windows or IAM service role.[+]https://www.youtube.com/watch?v=aR02m1Xsz1E&t=113sPlease repeat above steps for AWS-StopEC2Instance document and AWS-RunShellScript document.Scenario 2: When Instances are already in running stateThe above steps allowed me to successfully execute all the three above mentioned document in a Maintenance Window in both the scenarios.Additionally, After following above steps if issue still exists then I would request you to open a case under our support team for further troubleshooting as we need to check all the associated resources to troubleshoot it further.References[1] https://docs.aws.amazon.com/ARG/latest/userguide/resource-groups.htmlCommentShareSUPPORT ENGINEERNikunj-Ganswered 8 months agombklein 8 months agoThank you for these steps. I notice there's one incomplete sentence that looks kind of important:For Input parameters > "InstanceIDs" input parameterWhat is the value that's supposed to go in the InstanceIDs input parameter when targeting a resource group?Share0Thank you for pointing this out.Add all instance IDs separated by comma which you have mentioned in resource group.for example : i-abc********, i-def********CommentShareSUPPORT ENGINEERNikunj-Ganswered 8 months ago"
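If you end up scripting the setup the answer walks through, the same task registration can be done with boto3. A sketch under the same assumptions as the answer (window ID, window target ID, role ARNs and instance IDs are placeholders) that registers the AWS-StartEC2Instance automation task against the registered target group while passing explicit instance IDs as inputs:

# Sketch: register an AUTOMATION task (AWS-StartEC2Instance) with an existing maintenance window.
# Window ID, window target ID, role ARNs and instance IDs are placeholders.
import boto3

ssm = boto3.client("ssm")

ssm.register_task_with_maintenance_window(
    WindowId="mw-0123456789abcdef0",
    Targets=[{"Key": "WindowTargetIds", "Values": ["e32eecb2-646c-4f4b-8ed1-example"]}],
    TaskArn="AWS-StartEC2Instance",
    TaskType="AUTOMATION",
    ServiceRoleArn="arn:aws:iam::111122223333:role/MaintenanceWindowRole",
    Priority=10,
    MaxConcurrency="5",
    MaxErrors="1",
    TaskInvocationParameters={
        "Automation": {
            "DocumentVersion": "$DEFAULT",
            "Parameters": {
                # Explicit instance IDs, as in the answer, so the task has inputs
                # even when the instances are stopped at window start.
                "InstanceId": ["i-0123456789abcdef0", "i-0fedcba9876543210"],
                "AutomationAssumeRole": ["arn:aws:iam::111122223333:role/AutomationAssumeRole"],
            },
        }
    },
)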
Where can I find the actual references to API definitions and descriptions for ModelBiasMonitor and ModelExplainabilityMonitor Classes?I can a find a few mentions in the Amazon SageMaker documentation in the following links.https://sagemaker.readthedocs.io/en/stable/api/inference/model_monitor.htmlhttps://sagemaker-examples.readthedocs.io/en/latest/sagemaker_model_monitor/fairness_and_explainability/SageMaker-Model-Monitor-Fairness-and-Explainability.htmlWhere can I find the actual reference and the code implementation for these Classes?FollowComment
API definition for ModelBiasMonitor and ModelExplainabilityMonitor
https://repost.aws/questions/QU9SPQKzelSUu3dr-D4zaXHQ/api-definition-for-modelbiasmonitor-and-modelexplainabilitymonitor
true
0Accepted AnswerThe actual reference to the classes can be found here:https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/model_monitor/clarify_model_monitoring.py It encapsulates the definitions and descriptions for all of SageMaker Clarify related monitoring classes.CommentShareEXPERTVikesh_Panswered 2 years ago
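As a quick orientation to those classes, they are constructed like the other monitors in the SageMaker Python SDK. A minimal sketch (the execution role ARN and instance settings are placeholders; attaching a monitoring schedule additionally needs the bias/explainability configs shown in the example notebook linked in the question):

# Sketch: instantiate the Clarify-based monitors from the SageMaker Python SDK.
# The execution role ARN and instance settings are placeholders.
import sagemaker
from sagemaker.model_monitor import ModelBiasMonitor, ModelExplainabilityMonitor

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"

bias_monitor = ModelBiasMonitor(
    role=role,
    sagemaker_session=session,
    max_runtime_in_seconds=1800,
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

explainability_monitor = ModelExplainabilityMonitor(
    role=role,
    sagemaker_session=session,
    max_runtime_in_seconds=1800,
    instance_count=1,
    instance_type="ml.m5.xlarge",
)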
"Hi Team,I wanted to run spark application built using JDK 11 on EMR Serverless. Since the default image does not have support of JDK 11, I created the custom image based on following links:Use case 2 : https://aws.amazon.com/ru/blogs/big-data/add-your-own-libraries-and-application-dependencies-to-spark-and-hive-on-amazon-emr-serverless-with-custom-images/https://docs.aws.amazon.com/emr/latest/EMR-Serverless-UserGuide/using-custom-images.htmlThis is the content of my DockerFile ( I have M1 Mac)FROM--platform=linux/amd64 public.ecr.aws/emr-serverless/spark/emr-6.9.0:latestUSER root# install JDK 11RUN sudo amazon-linux-extras install java-openjdk11# EMRS will run the image as hadoopUSER hadoop:hadoopAfter uploading the image on ECR, I created the EMR Serverless application (x86_64) using the same custom image.Next I tried submitting the job with my jar built with JDK 11, however, it failed with following error:The job run failed to be submitted to the application or it completed unsuccessfully.Then, as per the above mentioned second link, I tried giving these two spark configuration while configuring the job:--conf spark.executorEnv.JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.16.0.8-1.amzn2.0.1.x86_64 --conf spark.driverEnv.JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.16.0.8-1.amzn2.0.1.x86_64I am still getting the below error:Job failed, please check complete logs in configured logging destination. ExitCode: 1. Last few exceptions: Caused by: java.lang.UnsupportedClassVersionError: <ClassName> has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0\nException in thread \"main\" java.lang.BootstrapMethodError: java.lang.UnsupportedClassVersionError: <ClassName> has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0...-- Update--I have also tried running the job by specifying JAVA_HOME in configurations like this:{ "applicationConfiguration": [ { "classification": "spark-defaults", "configurations": [], "properties": { "spark.driverEnv.JAVA_HOME": "/usr/lib/jvm/java-11-openjdk-11.0.18.0.10-1.amzn2.0.1.x86_64", "spark.executorEnv.JAVA_HOME": "/usr/lib/jvm/java-11-openjdk-11.0.18.0.10-1.amzn2.0.1.x86_64" } } ]}Am I missing any step?RegardsTapanFollowComment"
EMR Serverless Custom Image not running
https://repost.aws/questions/QU-LEd1uM2Rr2p82ZNOvv9xg/emr-serverless-custom-image-not-running
true
"0Accepted AnswerI am able to run my jar in JDK11 environment. The correct driver environment variable for JAVA_HOME is spark.emr-serverless.driverEnv.JAVA_HOME.Also, the latest JDK that gets deployed is of version : java-11-openjdk-11.0.18.0.10-1.amzn2.0.1.x86_64 and applicationConfiguration is not important.CommentShareTapan Sharmaanswered 21 days ago"
"When will AWS have an AI interface? I am thinking something like a chatbot that I tell it what to do such as create a vpc, create a rest api, create lambda function ect.?FollowComment"
AWS to have an AI interface for console management
https://repost.aws/questions/QUUBIwxU8dQMWj99wgj9Xlaw/aws-to-have-an-ai-interface-for-console-management
false
"Hi,I am unable to access table "Risk_model_user"."All_diss", even through I am a cia & s_cia user.getting error as :Permission denied on S3 path: s3://<s3_bucket_name>This query ran against the "risk_models_user" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: 96d89bfb-6691-4eab-9abb-0b67cb3f06dcFollowComment"
Access to the Risk tables
https://repost.aws/questions/QUrS2e_cwhQf-rkDfILUFuCw/access-to-the-risk-tables
false
0Please open a case with AWS Premium Support and provide all the resource details there for further investigation. For future reference - please refrain from posting any of the AWS resource ARNs on the public forum.CommentShareMeghana_Sanswered a year ago
"Hello,I want to start streaming using the MediaLive.Tell me please, for receiving, processing and transmitting a stream, I need to install third-party software, for example, OBS for Linux?Or is this done with own tools MediaLive, SDK or another software?FollowComment"
What software to install on the MediaLive
https://repost.aws/questions/QUGFIWigAKSGm_dXgRCDcgKQ/what-software-to-install-on-the-medialive
true
"0Accepted AnswerBased on your requirements, it does looks like vMix Call Pro Edition would work for you. They have a free watermarked demo version to try outLink: https://youtu.be/opgJTvV4dHULink: https://www.vmix.com/products/vmix-call.aspxThey also have a Multi-view demo here:Link:https://www.youtube.com/watch?v=tvl6v1GX_-ASounds like a fun project!-randyCommentShareRandyTakeshitaanswered 4 years ago0Setting Up the Upstream - The upstream system must be capable of sending a video stream to MediaLive via RTP/RTMP and the stream must be redundantly sent to two different IP addresses.You can setup 3rd party tools, such as OBS Studio as a contribution encoder using RTMP push to MediaLive.Link: https://aws.amazon.com/blogs/media/connecting-obs-studio-to-aws-media-services-in-the-cloud/Processing and transmission of streams can be done within the AWS ecosystem:o Set up OBS Studio as a contribution encoder using RTMP pusho Configure AWS Elemental MediaLive to receive an incoming contribution stream and encode it into a set of adaptive bitrate (ABR) streamso Configure AWS Elemental MediaPackage to connect to AWS Elemental MediaLive outputs, for further processing and packaging, to create a channel that can be viewed on connected devicesO Use the AWS Elemental MediaPackage endpoint as an origin for a CDN such as Amazon CloudFrontAdditional info:Link: https://docs.aws.amazon.com/medialive/latest/ug/getting-started.htmlHope this helps.-randyCommentShareRandyTakeshitaanswered 4 years ago0Great!!! Thanks!!!I read, very useful for my first steps.And a bit more, do I understand correctly, to deliver the stream to Youtube/Twitch it will come from AWS Elemental MediaPackage?CommentShareomega92answered 4 years ago0Both YouTube and Twitch live streaming utilize RTMP, which can be handled completely within AWS MediaLive (i.e. MediaPackage is not required).Also, if you want to Live stream just to YouTube or just to Twitch, you probably don't need AWS MediaLive. However, if you want your Live Stream to broadcast both to YouTube and Twitch concurrently, that is when OBS with AWS MediaLive will become useful.More about when you would consider using AWS MediaPackage with MediaLive:AWS Elemental MediaLive is deeply integrated with AWS Elemental MediaPackage so customers can easily combine live encoding with content origination, dynamic packaging, and live-to-VOD capabilities. To configure an AWS Elemental MediaLive channel with AWS Elemental MediaPackage, simply create a channel with AWS Elemental MediaPackage to get a destination address, then select HLS WebDAV as the output for your AWS Elemental MediaLive channel profile and add the destination address. With AWS Elemental MediaPackage you can create output groups for multiple delivery protocols like HLS and DASH, add DRM and content protection, and a live archive window for DVR-like features.Link: https://aws.amazon.com/mediapackage/faqs/Edited by: rtakeshi on Aug 4, 2019 10:39 AMCommentShareRandyTakeshitaanswered 4 years ago0Sorry, I had to describe my task.We will have from 1 to 6 sources (gamer with OBS) from which we must receive streams, merge them into one and send them concurrently to Twitch and YouTube. Is there enough MediaLive for this task?that is when OBS with AWS MediaLive will become usefulShould I install an OBS (Linux version) on a MediaLive and use it to configure and send a stream? 
Did I understand you correctly?Thanks for your help!CommentShareomega92answered 4 years ago0From my understanding,OBS will allow multi-cam, but only allows switching between cameras, not combining live streams.OBS is setup on its own server and will push its live stream over RTMP to AWS Elemental MediaLiveAWS Elemental MediaLive is NOT mixing software, it will take in one live video feed and can send the feed to concurrently to YouTube and Twitch.For your architecture, I would recommend researching vMix + AWS Elemental MediaLiveLink: https://www.vmix.comvMix will allow allow you to receive/merge input streams into one. vMix will also allow you to send the output stream to both Twitch and YouTube. However, because live streaming does take up significant bandwitch and CPU, it may result in lagging. That is where AWS Elemental MediaLive will possibly help by taking in the Live video stream from vMix and then sending the stream concurrently to both Twitch and YouTube.Hope this helps.-randyCommentShareRandyTakeshitaanswered 4 years ago0omega92Can you please elaborate on the requirement of merging the 6 feeds into a single feed that you can then send to Twitch & YouTube? What do you mean by needing to "merge" the 6 feeds into one? Does this have to happen in real-time?Within MediaLive you have the capability of creating multiple inputs and attaching them to a single MediaLive channel. Using the MediaLive Schedule (https://docs.aws.amazon.com/medialive/latest/ug/working-with-schedule.html), you can then select any one of the inputs to be used to create the MediaLive output. Using the Input Switch schedule action you can set a schedule, e.g. switch to Input 1 at time A, switch to input 2 at time B, etc.Also, please note that you can't install any SW on the MediaLive instance. You will have to install OBS upstream of MediaLive so that they can send content via OBS to MediaLive.RegardsCommentShareAWS-User-5437070answered 4 years ago0Thank you for your help, now I will describe the task in detail.We hold World of Tanks tournaments, two versus two and a commentator works on the match. The commentator has the main source with OBS and there are 4 sources from the players webcams. I need to put four sources from webcams on the main source, so that it looks like in the picture and send it in one stream simultaneously to YouTube and Twitch.https://imgur.com/7CGoUuWWill the MediaLive help me with this task or am I misrepresenting the tasks for which it is needed?CommentShareomega92answered 4 years ago0Very cool program, thanks! I will research it.I in the message above described my task in detail, please see. Does it really look like vMix?CommentShareomega92answered 4 years ago0omega92Unfortunately MediaLive does not have to capability to do picture-in-picture (PIP) as shown in your screenshot example. This will have to be done upstream of MediaLive.As for what type of device/SW solution you would need upstream of MediaLive:You would need a solution that can accept as input the different player sources, then create a scene with main video and PIP overlays, and then send this content via either RTP with FEC, or RTMP to MediaLive. 
Perhaps someone on the forum will know of such a solution and would be willing to make a recommendation for you.RegardsCommentShareAWS-User-5437070answered 4 years ago0You would need a solution that can accept as input the different player sources, then create a scene with main video and PIP overlays, and then send this content via either RTP with FEC, or RTMP to MediaLive.And the MediaLive will receive the stream and help him transfer it simultaneously to Twitch/Youtube?CommentShareomega92answered 4 years ago0Thank you very much for the help!))CommentShareomega92answered 4 years ago0omega92Yes, from a single MediaLive channel you can stream to both YouTube and Twitch at the same time.RegardsCommentShareAWS-User-5437070answered 4 years ago0Ypu can Stream from OBS to AWS Elemental MediaLive which is simple.Learn more about it : https://www.apps4rent.com/obs-open-broadcaster-software-streaming-hosting/CommentShareAdrianG001answered 3 years ago0For an AWS blog post on streaming from OBS, see https://aws.amazon.com/blogs/media/connecting-obs-studio-to-aws-media-services-in-the-cloud/CommentShareAWS-User-5437070answered 3 years ago"
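If you end up scripting the MediaLive side of the setup discussed in this thread, creating the RTMP push input that the upstream encoder (OBS or vMix) sends to can be done with boto3. A sketch only; the region, allowed CIDR and stream names are placeholders, and the channel itself (with its YouTube and Twitch RTMP output destinations) still has to be created separately:

# Sketch: create an RTMP_PUSH input for MediaLive that an upstream encoder (OBS/vMix) can send to.
# Region, allowed CIDR and stream names are placeholders.
import boto3

ml = boto3.client("medialive", region_name="us-east-1")

# The input security group restricts which source IPs may push to the input.
isg = ml.create_input_security_group(
    WhitelistRules=[{"Cidr": "203.0.113.0/24"}]
)["SecurityGroup"]

inp = ml.create_input(
    Name="tournament-rtmp-in",
    Type="RTMP_PUSH",
    InputSecurityGroups=[isg["Id"]],
    # Two destinations cover the standard (redundant) channel class; a single-pipeline channel needs one.
    Destinations=[{"StreamName": "live/primary"}, {"StreamName": "live/backup"}],
)["Input"]

# These are the RTMP endpoints to configure in OBS/vMix as the stream destination.
for dest in inp["Destinations"]:
    print(dest["Url"])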
"I've backed up 7 on premise VMs to AWS using AWS Backup. By all indications the 7 machine backups were completed successfully 2 days in a row. In attempting to restore a single VM from AWS back to on premise, the error is given:"Invalid request detected during virtual machine creation. Aborted restore job"A solution isn't apparent in the documentation:https://aws.amazon.com/blogs/storage/backup-and-restore-on-premises-vmware-virtual-machines-using-aws-backup/Can anyone offer some clues as to how to troubleshoot through this?Thank you in advance.Vcenter version: 8.0.0.10000Hypervisor VMware ESXi, 7.0.0FollowCommentrePost-User-8645854 7 months agoThank you kindly for answering the question.The documentation indicates the supported version of VMware ESXi is 7.0, which is what we have.The documentation does not seem to address the version of vCenter. Does the version of vCenter matter? Is there any indication of when vCenter 8.x might be supported?Thank you in advance for your guidance.Share"
VMWare: AWS Backup to on-premise virtual machine restore
https://repost.aws/questions/QU2mS4sj2SS7-5w3sDHHLThg/vmware-aws-backup-to-on-premise-virtual-machine-restore
false
"1Hello - Have you successfully restored to a 7.x vCenter? 8.x is not listed as supported.https://docs.aws.amazon.com/aws-backup/latest/devguide/vm-backups.html#supported-vmsCommentShareEXPERTPatrick Kremeranswered 7 months agoPatrick Kremer EXPERT6 months agoAddressing your followup questions - I submitted a documentation update request to also state supported vCenter versions. You don't configure the gateway to connect to ESXi hosts, you configure the gateway to connect to a vCenter Server, so the version does matter.I do not have a timeline for when vCenter 8 will be supported.SharerePost-User-8645854 6 months agoThank you kindly for your follow up note. The additional information is very helpful.Share"
"Hello Community,I am currently learning for the aws database specialty exam/certificate.In my online courses and in the aws documentation for the rds option groups it specifies that you cannot remove permanent options from an option group.https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithOptionGroups.htmlPermanent options, such as the TDE option for Oracle Advanced Security TDE, can never be removed from an option groupBut In my test I was able to delete that permanent option "TDE" from the (disassociated) option group.Is that by design for unused option groups or is the documentation not specific enough?Thanks in advance.HeikoFollowComment"
RDS Option Group - Oracle EE TDE
https://repost.aws/questions/QUYUsTqEzOTe6UK7lsP3YXWA/rds-option-group-oracle-ee-tde
true
"0Accepted AnswerHi Heiko,Permanent options can be removed as long as there are no DB instances or snapshots associated with the option group.You wrote that you had the option group already disassociated; in that case, removing the permanent option worked as designed.In my opinion, what doesn't come across 100% clear in the documentation (the link you've given) is that once you set such an option, it becomes permanent for the database. If you wanted to associate another option group to this database, it must have the (permanent) TDE option in it as well.As a side note, the help text of the AWS Console says so.HTH,UweCommentShareUwe Kanswered 4 months ago0Hi Uwe,thanks four your answer.SincerelyHeikoCommentShareHeikoMRanswered 4 months agoUwe K 3 months agoYou're welcome and all the best for your exam!BTW, I wonder whether the exam questions really reflect this particularity. But I suppose they assume a (more typical) setup where associated DBs or snapshots still exist. However, if you should encounter a discrepancy between your observations above and a related exam question, please give a feedback to the training&certification team.Share"
How do you set report_host / report-host on a MySQL or MariaDB RDS instance for replication from an external source? Or something that will appear in show replicas.FollowComment
Setting report_host on MySQL/MariaDB RDS instance for replication from external host
https://repost.aws/questions/QUJA_AJt6IR7aWMviAzR_fFw/setting-report-host-on-mysql-mariadb-rds-instance-for-replication-from-external-host
false
"0Hi,This issue has been tracked internally with our service team. Currently, we do not have a way to set this parameter since RDS is managed service with some limitations.If you want to track issue further, you may also reach to us through support case or you may raise feature request from console:navigate lower left corner--> click Feedback --> click feature request which should reach to service team directly.CommentShareSUPPORT ENGINEERKevin_Zanswered a year ago"