Dataset columns: Description (string, 6 to 76.5k characters), Question (string, 1 to 202 characters), Link (string, 53 to 449 characters), Accepted (bool, 2 classes), Answer (string, 0 to 162k characters).
"Hi everyone,We are trying to Run a Command (AWS-ApplyPatchBaseline) on an EC2 instance running Win 2012 R2.This EC2 instance has a Private IP (no public ip) and is using SQUID to connect to internet. We have defined Proxy at IE level and also SSM agent (in the registry) using powershell script provided by AWS.We ave provided the EC2SSMFullAccess IAM role to this instance.Running this command will always fail with this Output (see at end)What we like to know is this SSM command not supported when running behind a proxy server?ThanksPatch Summary forPatchGroup :BaselineId :SnapshotId :OwnerInformation :OperationType : ScanOperationStartTime : 0001-01-01T00:00:00.0000000ZOperationEndTime : 0001-01-01T00:00:00.0000000ZInstalledCount : -1InstalledOtherCount : -1FailedCount : -1MissingCount : -1NotApplicableCount : -1WIN-P5HSOSPN3J9 - PatchBaselineOperations Assessment Results - 2017-05-03T14:34:19.561Scan found no missing updates.----------ERROR-------failed to run commands: exit status 4294967295Invoke-PatchBaselineOperation : A WebException with status ConnectFailure wasthrown.At C:\ProgramData\Amazon\SSM\InstanceData\i-0dcd2ed49067b2cc8\document\orchestration\f596754b-b9a2-4cc2-982c-c209a387d895\awsrunPowerShellScript\0.awsrunPowerShellScript_script.ps1:155 char:13+ $response = Invoke-PatchBaselineOperation -Operation Scan -SnapshotId ''-Instan ...+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+ CategoryInfo : OperationStopped: (Amazon.Patch.Ba...UpdateOperation:FindWindowsUpdateOperation) [Invoke-PatchBaselineOperation], AmazonServiceException+ FullyQualifiedErrorId : PatchBaselineOperations,Amazon.Patch.Baseline.Operations.PowerShellCmdlets.InvokePatchBaselineOperationEdited by: SMUnix on May 3, 2017 6:10 PMFollowComment"
AWS-ApplyPatchBaseline run command fails when running behind a proxy server
https://repost.aws/questions/QU0-z-3aDnQYem2Gnp2vnQ9Q/aws-applypatchbaseline-run-command-fails-when-running-behind-a-proxy-server
true
"0Accepted AnswerThis will be addressed in the future releases. Meanwhile this work around might help._Please configuring the IE settings under the SYSTEM context via PSExec __PS C:\Users\Administrator\Downloads\PSTools> .\psexec -i -s -d cmd __Then in the System User Context CMD prompt; __C:\Windows\system32>whoami __nt authority\system __C:\Windows\system32>inetcpl.cpl _Then configured the same settings that were stored in HKCU & re-ran the document.CommentSharerazdarbAWSanswered 6 years ago0Thanks for this info.CommentShareSMUnixanswered 6 years ago0Hi I am facing the similar issue Here.We are trying to use Patch Manager to do patching on an EC2 instance running Windows server 2016.This EC2 instance has a Private IP (no public ip) and is using SQUID to connect to internet.We have defined Proxy and also configuring the IE settings under the SYSTEM context via PSExec.However, the Windows patching will always fail with this Output (see at end)Any advise?++The command output displays a maximum of 2500 characters. You can view the complete command output in either Amazon S3 or CloudWatch logs, if you specify an S3 bucket or a CloudWatch logs group when you run the command.++++Patch Summary for++++PatchGroup :++++BaselineId :++++SnapshotId :++++OwnerInformation :++++OperationType : Scan++++OperationStartTime : 0001-01-01T00:00:00.0000000Z++++OperationEndTime : 0001-01-01T00:00:00.0000000Z++++InstalledCount : -1++++InstalledRejectedCount : 0++++InstalledOtherCount : -1++++FailedCount : -1++++MissingCount : -1++++NotApplicableCount : -1++++UnreportedNotApplicableCount : -1++++STB-MM-2FA - PatchBaselineOperations Assessment Results - 2019-04-30T12:45:30.802++++Scan found no missing updates.++++----------ERROR-------++++Invoke-PatchBaselineOperation : Instance Id i-0ca6ecd1648836185 doesn't match++++the credentials++++At C:\ProgramData\Amazon\SSM\InstanceData\i-0ca6ecd1648836185\document\orchestr++++ation\349aa1e6-fd35-4687-a8b2-78db99323015\PatchWindows_script.ps1:195 char:13+++++ $response = Invoke-PatchBaselineOperation -Operation Scan -SnapshotId ...+++++ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+++++ CategoryInfo : OperationStopped: (Amazon.Patch.Ba...UpdateOpera++++tion:FindWindowsUpdateOperation) [Invoke-PatchBaselineOperation], AmazonSi++++mpleSystemsManagementException+++++ FullyQualifiedErrorId : PatchBaselineOperations,Amazon.Patch.Baseline.Op++++erations.PowerShellCmdlets.InvokePatchBaselineOperation++++failed to run commands: exit status 4294967295++CommentShareuser9012345answered 4 years ago0Any idea when the fix will make it into the ssm agent? It's been 2 years since the original post about this and I just tried the suggested workaround but the ssm agent still seems to ignore the proxy settings and tries to communicate out directly to:ssm.us-east-1.amazonaws.comCommentShareryanfergusonanswered 4 years ago0Same issue - but we are not using any proxy server. The Invoke-PatchBaselineOperation fails for only one of the machines in the Patch Group with this error:Invoke-PatchBaselineOperation : The install operation did not completesuccessfully. Additional failure information from Windows Update:HResult: -2145124318 | Result Code: orcFailedAt C:\ProgramData\Amazon\SSM\InstanceData\i-03a9dad67ec4ced1a\document\orchestration\fbc6a0a2-c2ad-45ec-81ee-688711b881eb\PatchWindows_script.ps1:195 char:13$response = Invoke-PatchBaselineOperation -Operation Install -Snapsho ... 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ CategoryInfo : OperationStopped: (Amazon.Patch.Ba...UpdateOperation:InstallWindowsUpdateOperation) [Invoke-PatchBaselineOperation], ExceptionFullyQualifiedErrorId : Exception Level 1:Error Message: The install operation did not complete successfully. Additional failure information from Windows Update:HResult: -2145124318 | Result Code: orcFailedStack Trace: at Amazon.Patch.Baseline.Operations.PatchNow.Implementations.InstallWindowsUpdateOperation.InstallUpdates(IEnumerable`1 filteredUpdates)at Amazon.Patch.Baseline.Operations.PatchNow.Implementations.InstallWindowsUpdateOperation.InstallUpdates()at Amazon.Patch.Baseline.Operations.PatchNow.Implementations.InstallWindowsUpdateOperation.DoWindowsUpdateOperation(),Amazon.Patch.Baseline.Operations.PowerShellCmdlets.InvokePatchBaselineOperationfailed to run commands: exit status 4294967295CommentShareSKulaianswered 4 years ago0I am using RunPatchbaseline for installing windows updates on the Windows server 2k12,2k16 and 2k19. It works fine with 2k12 and 2k19 however this fails in case of 2k16. Also, there are cases to be considered here. Below are my test cases.Server hosted in public subnet with outbound traffic enabled - SuccessServer hosted in public subnet with outbound traffic disabled - SuccessServer hosted in private subnet with outbound traffic enabled - FailI don't understand why it fails on windows server 2k16. Can anyone guide me in the right direction to get it resolved.CommentShareshaileshsutar88answered 3 years ago0I had a similar problem. Windows 2016 with no external address, accessing windows update via NAT. It turns out that windows firewall service must be enabled for windows update to download patches.Once I started the windows firewall service the server was able to download patches.I also had this message in the windows update log.2020/04/02 08:26:21.6793588 1128 3080 DownloadManager BITS job {3E75293B-FE35-4A1B-9877-F624F4A18DA6} hit a transient error, updateId = {034DE509-A373-470E-A1D7-2432D5399D70}.201 <NULL>, error = 0x800706D92020/04/02 08:26:21.6803449 1128 3080 DownloadManager Error 0x800706d9 occurred while downloading update; notifying dependent calls.Hope this helps you.CommentSharesimonhillanswered 3 years ago"
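For readers who want to trigger the same patch scan programmatically once the proxy issue is resolved, here is a minimal boto3 sketch; the instance ID and region are placeholders, and AWS-RunPatchBaseline is the newer document that supersedes AWS-ApplyPatchBaseline. This is illustrative only, not the poster's setup.

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")  # region is a placeholder

resp = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],   # placeholder instance id
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Scan"]},    # or ["Install"]
)
command_id = resp["Command"]["CommandId"]

# Poll for the result of the invocation on that instance.
out = ssm.get_command_invocation(CommandId=command_id,
                                 InstanceId="i-0123456789abcdef0")
print(out["Status"], out.get("StandardErrorContent", ""))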
"AWS Lighsail says that every instance (virtual server) is free for the first three months and $3.50 thereafter for the most basic plan. Let's suppose an AWS user creates an instance in the Oregon region and deletes it BEFORE the end of the first three months. Let's also suppose that the AWS user creates a new instance in the same region right after deleting the first instance, will AWS charge the user or let him have a new three-free-month trial for the new instance?FollowComment"
AWS instance: free for first three months
https://repost.aws/questions/QUhwNWfEsPQeabyQtX5Rc4YQ/aws-instance-free-for-first-three-months
false
"0you can find the detailed information herehttps://aws.amazon.com/lightsail/pricing/as summaryThe AWS Lightsail free tier offering you mentioned is not exactly for the first three months. Instead, AWS offers a free tier for Lightsail with specific usage limits. The free tier provides 750 hours of usage per month for the first month of a new Lightsail account. The 750 hours can be spread across multiple instances within that month.To answer your question, if a user creates an instance and deletes it before the end of the first month, the free tier will still cover the new instance created afterward, as long as the combined usage of all instances does not exceed 750 hours within the first month.After the first month, the free tier no longer applies, and you will be charged the regular rate based on your chosen plan, such as $3.50 per month for the most basic plan.CommentShareEXPERTsdtslmnanswered 2 months agoEXPERTBrettski-AWSreviewed 2 months ago"
"My custom resource is configured as follows:Resources: SecretTagValCreation: Type: Custom::SecretTagValCreation Properties: ServiceToken: arn:aws:lambda:us-east-2:ACCOUNT_NUMBER:function:ReturnSecretToCFNForEC2 Region: !Ref "AWS::Region" Env: !Ref 'EnvTagValue' ProjID: !Ref 'ProjIDTagValue' Dept: !Ref 'DepTagValue' Owner: !Ref 'OwnerTagValue' StackID: !Ref 'AWS::StackId' StackName: !Ref 'AWS::StackName'I have the AWS CloudFormation stack created in one account, and an AWS Lambda function created in a different account. My code works, and the Lambda function can be invoked by the CloudFormation role.However, I can't delete my stacks because the custom resource doesn't get deleted during stack creation. The status of the stack is "DELETE_FAILED" or "ROLLBACK_FAILED" (if there is an error with stack). If I retain the custom resource and try deleting the stack again, then the stack gets deleted. However, this is not ideal. How do I delete the resources properly?FollowComment"
Why does my stack deletion fail because of an error that occurs when deleting a custom resource?
https://repost.aws/questions/QUQf2g5x0nReGW1PEVZC3pIg/why-does-my-stack-deletion-fail-because-of-an-error-that-occurs-when-deleting-a-custom-resource
true
0 Accepted Answer: This issue occurs under one or more of the following conditions: you don't send a response, or an issue occurs during your handler's cleanup. Check that you've implemented the delete event in your custom resource's handler (event['RequestType'] == 'Delete'). (Answered by Raphael, EXPERT, 3 years ago; reviewed by Faraz_AWS, EXPERT, 2 years ago)
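For illustration, here is a minimal sketch (not the poster's actual function) of a Python custom-resource handler that handles the Delete event and always sends a response back to CloudFormation, which is what prevents the DELETE_FAILED/ROLLBACK_FAILED states; the actual resource logic is omitted.

import json
import urllib.request

def handler(event, context):
    status, reason = "SUCCESS", ""
    try:
        if event["RequestType"] == "Delete":
            # Clean up whatever the resource created; nothing to do in this sketch.
            pass
        else:
            # Create/Update logic would go here.
            pass
    except Exception as exc:
        status, reason = "FAILED", str(exc)

    body = json.dumps({
        "Status": status,
        "Reason": reason or f"See CloudWatch log stream {context.log_stream_name}",
        "PhysicalResourceId": event.get("PhysicalResourceId", context.log_stream_name),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": {},
    }).encode("utf-8")

    # CloudFormation waits on this pre-signed S3 URL; not responding is what
    # leaves the stack stuck during delete or rollback.
    req = urllib.request.Request(event["ResponseURL"], data=body, method="PUT",
                                 headers={"Content-Type": ""})
    urllib.request.urlopen(req)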
"Some csv data sources that I've updated into s3 are showing an object url (It has ...s3.us-west-1.amazonaws.com... in the url).When trying to paste the full path in the browser, I get this error:<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>Q7W20ZWNAHAP2VXM</RequestId><HostId>sc6hPDs7qlwkZ8wozFhWenhx8tFx1VCz7nktdM87ieQF7yzy+2C2skVqDzrryiQps//BA+HHy2A=</HostId></Error>FollowComment"
AccessDenied when trying to paste the full path in the browser
https://repost.aws/questions/QURuFczm38TNOSKnySSpCxQA/accessdenied-when-trying-to-paste-the-full-path-in-the-browser
false
"1Hi.If you don't have access to the object, access to the S3 object will be denied.You have several options.Sign in to the AWS Management Console and download the S3 objectIssue a presigned URLEnable anonymous accessPlease refer to the following article.https://aws.amazon.com/premiumsupport/knowledge-center/s3-access-denied-error/?nc1=h_lsCommentShareEXPERTiwasaanswered a year ago"
"Get bucket Rest Api gives the response of the bucket of all regions. Even if I provide Region as Query or header, Tried all possible ways.At the last, I'm Filtering it within For loop by checking the bucket location. But it consuming A lot of Time.I just want buckets Within Single Region.Is That any alternative way to get buckets only within that single region with Rest API?FollowComment"
How to get all buckets within a single region through the REST API
https://repost.aws/questions/QU26O_zdOkRDGudrPLupPDeg/how-to-get-all-bucket-within-single-region-through-rest-api
true
"1Accepted AnswerAt the time being (January 2023), listing buckets from a specific region is only possible by calling get-bucket-location for each bucket, as you are already doing.CommentShareEXPERTalatechanswered 4 months ago"
The data migration task fails even after making all the changes in MongoDB. I have created a role and user in the database I am using (a non-root user account), but the change stream does not seem to be working correctly: "Encountered an error while initializing change stream: 'not authorized on admin to execute command ....." The endpoint connection test is successful. Please let me know what I am missing.
AWS DMS CDC task with Mongodb as source fails
https://repost.aws/questions/QUlLcPEho_RByiPbouznqKug/aws-dms-cdc-task-with-mongodb-as-source-fails
false
"1this could be due to missing permission to initialize change stream.More specifically this is for source mongodb 4.x onwardsplease refer to section which tells how to create a role with that permission and also how to assign the same to dms user.https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html#:~:text=Permissions%20needed%20when%20using%20MongoDB%20as%20a%20source%20for%20AWS%20DMSCommentShareAWS-User-8174525-subhashanswered a year agorePost-User-7605264 a year agoHi team,Why is it a admin DB? Seems I never specified db before, how to modify the db name?Encountered an error while initializing change stream: 'not authorized on admin to execute command {....$db: "admin".....' [1020401] (change_streams_capture.c:356)Share"
"Hi there,I am looking for a way to increase the limit on IPv6s per Instance. Within the limit increase screens I cannot find any option.Does anyone know how to do this and if this is even possible?Best RegardsChrisFollowComment"
Limit increase for IPv6 per instance possible?
https://repost.aws/questions/QUcrrB5HzdTr2ST2KJyr0eQw/limit-increase-for-ipv6-per-instance-possible
false
"0anyone has an idea?CommentSharechris4242answered 4 years ago0Hello,IP addresses are per network interface per instance type and cannot be changed. Therefore, if you require more IPv6 addresses, consider moving to a different Instance type. The following document lists the maximum number of network interfaces per instance type, and the maximum number of private IPv4 addresses and IPv6 addresses per network interface:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENIMore information regarding changing instance types can be found in the below documentation:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-resize.htmlRegards,LoiyAWSCommentShareAWS-User-5964800answered 4 years ago0Thank you for letting me know.So I get a prefix with millions of addresses, but can use 2...If you look at the link (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI), it seems the IPv4 was just copied over to IPv6. Booking a different instance type would mean to get the largest available for just 50 IPv6. So basically rent the 747 to transport a peanut...If you have any internal channels to give product improvements, please put in a hint, that IPv6 works different than IPv4. Hence the current limits are quite ridiculous (Especially as you get a /56 prefix assigned) and just produce IPv6 NAT and frustration with customers.Best RegardsChrisCommentSharechris4242answered 4 years ago"
"I was watching one of AWS' guides on Cognito.At a point in the video (15:08) - the guy copied his id_token and uses it to authenticate his API call (rather than his access_token as you might expect).Initially I though this was unintentional and that it was just happy coincidence that it worked. But when I went about this the "right" way, and sent the access_token (or rather, let Postman's OAuth 2.0 authentication pass it for me), I got hit with the following error from the Cognito SDK:Invalid login token. Missing a required claim: audI checked and, sure enough, no aud claim in the access_token, but there was one in the id_token, which was obviously the reason the guide was using the id_token. But why?? Is this intentional misuse?Even according to AWS' own documentation:The purpose of the access token is to authorize API operationsEdit:A week or so after posting this, YouTube (being YouTube) decided that I should watch this: ID Tokens vs Access Tokens - Do you know the difference?! (for the TLDR just skip to 6:57).There's nothing new here for most people familiar with OAuth/OIDC. But directly contradicts Cognito's use. I don't want this edit to repurpose this question as pure criticism - that wasn't the original intent. If anybody has knowledge as to why Cognito devs have (or may have) taken this approach - please post!FollowComment"
Why do Cognito access tokens not have an audience claim?
https://repost.aws/questions/QU4wv0qKIMSk2O64Wk5P4jdg/why-do-cognito-access-tokens-not-have-an-audience-claim
false
"1Hello,The Identity Pool integrates with User Pool where the User Pool serves as the authentication provider. One of the benefits of this integration is that the authenticated user's groups and role association in the User Pool can be used to grant fine-grained access control in the Identity Pool. For example, you can have a rule in your Identity Pool that grants read-only permissions to the user if they belong to a read-only group in the User Pool.The access token in the OAuth framework was not intended to contain user information like group association and attributes. You typically will use the access token to obtain the Id Token which will contain the user information. Since the fine-grained access control could rely on the user information, you will need to use the Id token to provide the user's information to the Identity Pool which you can then leverage to create rules for your fine-grained access control.I hope this provides clarity to why Id token is used in this case.For more information, please refer to the Role-based access control documentation - https://docs.aws.amazon.com/cognito/latest/developerguide/role-based-access-control.htmlCommentShareAWS-Woleanswered 9 months agoANeeson 9 months agoSo you're saying that the answer is "yes", this is intentional misuse? I.e. They're they need the claims, but are trying to save a trip to the userinfo endpoint to get them? Some systems use JWT access_tokens to skip this step, but they're still access tokens. I had assumed that this is what Cognito was trying to do as their access_token was a JWT.So assuming they are intentionally misusing id_tokens, why is their access_token a JWT? What are you supposed to do with it?Share"
"Hello. My billing shows I am being billed for a SageMaker instance (type = ml.c5.9xlarge-RStudioServerPro; Region = US East (N. Virginia)), but my EC2 Dashboard shows no instances running, either in that region or globally, and I have nothing running in my RStudio app. I need to find and stop that instance. Thank you.FollowComment"
Cannot Find Running Instance
https://repost.aws/questions/QU0PBMqWVHRzyI7p-ZTfV2Jg/cannot-find-running-instance
false
"0Hello learn2skills:Thank you very much for your answer. I have followed the cleanup instructions earlier, and have no Notebooks or Endpoints. I will work to configure CloudWatch to see if that helps, but the billing shows the instance type and region. Will CloudWatch provide more information? Do you have additional suggestions?Thank you.CommentSharerePost-User-6397876answered 9 months ago0HI,ml EC2 instances do not appear in the EC2 console. You can find their metrics in Cloudwatch though, and create dashboards to monitor what you need:They don't appear in the EC2 UI and API as they are being managed by the SageMaker control planeNotebook instance: you have two options if you do not want to keep the notebook instance running. If you would like to save it for later, you can stop rather than deleting it.To stop a notebook instance: click the Notebook instances link in the left pane of the SageMaker console home page. Next, click the Stop link under the ‘Actions’ column to the left of your notebook instance’s name. After the notebook instance is stopped, you can start it again by clicking the Start link. Keep in mind that if you stop rather than delete it, you will be charged for the storage associated with it.To delete a notebook instance: first stop it per the instruction above. Next, click the radio button next to your notebook instance, then select **Delete **from the Actions drop down menu.Endpoints: these are the clusters of one or more instances serving inferences from your models. If you did not delete them from within a notebook, you can delete them via the SageMaker console. To do so:Click the Endpoints link in the left panel.Then, for each endpoint, click the radio button next to it, then select Delete from the Actions drop down menu.You can follow a similar procedure to delete the related Models and Endpoint configurations.Clean upOpen the SageMaker console at https://console.aws.amazon.com/sagemaker/ and delete the notebook instance. Stop the instance before deleting it.Refer-https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-ex-cleanup.htmlhttps://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-delete-resources.htmlIf the Answer is helpful, please click Accept Answer & UPVOTE, this can be beneficial to other community members.CommentSharelearn2skillsanswered 9 months ago0@rePost-User-6397876You can use the following script to find idle instances. You can modify the script to stop the instance if idle for more than 5 minutes or have a cron job to stop the instance.import boto3last_modified_threshold = 5 * 60sm_client = boto3.client('sagemaker')response = sm_client.list_notebook_instances()for item in response['NotebookInstances']: last_modified_seconds = item['LastModifiedTime'].timestamp() last_modified_minutes = last_modified_seconds/60 print(last_modified_minutes) if last_modified_minutes &gt; last_modified_threshold: print('Notebook {0} has been idle for more than {1} minutes'.format(item['NotebookInstanceName'], last_modified_threshold/60))CommentSharelearn2skillsanswered 9 months ago"
"Hello,We have setup a site to site VPN from our main office to and it's connected to our transit gateway in AWS. I'm able to ping our main VPC instances over the VPN tunnel. We have other accounts/VPC's that is connect to the transmit gateway but i am unable to ping those instances over the VPN. I have added the new network in our Cisco VPN profile but as soon as i add it, i lose connection to the main VPC and the new VPC instance starts pinging. It seems as if i'm only allowed to ping one VPC at a time. I talked with Cisco and they said it's because my VPN is policy based and not route based. Can anyone tell me how to create a route based VPN tunnel through the transit gateway or tell me if i'm missing a step?ThanksFollowComment"
AWS Transit Gateway with Cisco ASA Routing Issues
https://repost.aws/questions/QUBv5iBzo8QueaNaM2e1Y7Fg/aws-transit-gateway-with-cisco-asa-routing-issues
false
"0Hi,an AWS site-to-site VPN tunnel is always route-based. You should configure the Cisco ASA end of the connection as route-based (https://www.cisco.com/c/en/us/support/docs/security-vpn/ipsec-negotiation-ike-protocols/214230-configure-policy-based-and-route-based-v.html). In AWS, you should set both the "Local IPv4 Network Cidr" and "Remote IPv4 Network Cidr" settings to 0.0.0.0/0.The reason why only one VPC is reachable at a time is that one AWS site-to-site VPN connection only permits one security association in each direction to be active at one time. When you configure a policy-based tunnel on the ASA with several IP networks configured in the encryption domain, the ASA will establish a separate security association for each combination of IP networks (traffic selectors) communicating over the tunnel.For example, if you have the CIDR blocks 10.12.0.0/16 and 10.45.0.0/16 configured for your VPCs, and the site-to-site VPN connects them to a a single on-premises CIDR block 10.240.0.0/16, then traffic from on premises to the first VPC will cause a security association to be established from 10.240.0.0/16 to 10.12.0.0/16. When traffic is attempted to the other VPC, the first pair of SAs will be torn down and new ones established between 10.240.0.0/16 and 10.45.0.0/16. That's the phenomenon you are seeing.When you configure a route-based VPN on the ASA, it will only establish one security association in each direction, with 0.0.0.0/0 on both sides of the tunnel. Regardless of how many VPCs and on-premises networks you have, they will all be reachable without having to establish additional SAs.Note that the cryptographic settings in the examples in Cisco's article are seriously weak. AWS site-to-site VPN supports the most secure settings recognised by the ASA.CommentShareLeoMKanswered 2 years ago0Thanks for the response. So we torn down the static VPN and we are using BGP or Dynamic. When you setup a BGP tunnel, it keeps 2 tunnels active. What we are seeing now, is that traffic is going through one tunnel and coming back through the other which is resulting in sometimes not being able to ping some devices in some VPCs. Sometimes we can ping the device, sometimes we can't. Cisco seems to think that it's on AWS side with traffic trying to come back through the other tunnel. Have you seen this scenario?CommentSharemjpitanswered 2 years ago"
"I'm using the AWS SDK for PHP and when I compose and send an email, they always arrive in plain text format rather than the HTML format provided. I provide both the HTML and Text attributes for the Body. The documentation indicates that you can provide one or the other or both, but leaving off the Text attribute results in a validation error.I've sent emails to two different recipient systems, and both display the plain text content.What am I missing?FollowComment"
Using SES sendEmail to deliver HTML formatted emails
https://repost.aws/questions/QUuvmbfVPgQGCzmnw4_j0Gww/using-ses-sendemail-to-deliver-html-formatted-emails
false
"0Hi,The following document shows how to use the AWS SDK for PHP to send an email through Amazon SES:https://docs.aws.amazon.com/ses/latest/dg/send-an-email-using-sdk-programmatically.html#send-an-email-using-sdk-programmatically-examplesThe code in this tutorial was tested using PHP 7.2.7. This code sample sends the email in both the formats – HTML and plain text.When you use both formats to send the same content in a single message, the recipient's email client decides which to display, based upon its capabilities as discussed here:https://docs.aws.amazon.com/ses/latest/dg/send-email-concepts-email-format.html#send-email-concepts-email-format-bodyYou may send the email in only HTML format by modifying the Body attribute as follows:'Body' => [ 'Html' => [ 'Charset' => $char_set, 'Data' => $html_body, ], ],Also to view the email in HTML format, please ensure that you use HTML-enabled email clients.CommentShareSUPPORT ENGINEERAWS-User-0628836answered 4 months ago"
"Hi,I have setup AWS workspace with Windows. Now when I am using it, it just keeps freezing in between and when every time I have to disconnect and reconnect, then it comes back to life.Any help will be really good.FollowComment"
AWS Workspace client on Mac keeps freezing
https://repost.aws/questions/QU6TjI8ZX4TmWtrRg96-Zfmg/aws-workspace-client-on-mac-keeps-freezing
false
"0Hello. Please see if any of these helps:Update the WorkSpaces macOS client application to a newer versionThe WorkSpaces client application won't run on my MacIf you want WorkSpaces to keep you logged in until you quit or your login period expires, select the Keep me logged in check boxOther references to Troubleshoot WorkSpaces client issuesCommentShareAWS-User-2870192answered 5 months ago"
"Hello.One of our customers has an AWS solution with Palo Alto firewalls. Sitting in front of those is a load balancer and in the trust zone a web server. We have been asked to enable inbound ssl decryption on the Palo Alto's following a security issue earlier this year. We have created a web server cert and private key pair, imported to the palo's and created decryption profile and rules but the firewalls will not decrypt due to 'private key not matching public key'. We are wondering if this is due to the cert on the client (essentially the load balancer) being different. Traditionally the client would have the same cert as the server but in this case the client has an amazon cert. How do we get around this, what is the best way to set up, create a cert on the load balancer and use that on the client and web server? thanksFollowCommentCarlosGP-AWS 4 months agoHi there,Can you confirm if this is the flow of the packets:Clients->IGW->ALB->FW->Web-Serverif thats the case, the ALB would be doing the TLS termination and sending the public cert to the client.From there, the ALB would create a new HTTP or HTTPS connection towards the Web-server. If its HTTPS, the ALB just accepts any certificate as it doesn't check for CA.So now you're trying to terminate this TLS connection on the Palo Alto Firewall? or you're trying to transparently inspect the incoming packets without terminating the connection?If you can elaborate on the part between the ALB->FW->Web-Server so I can try and help point the issue.Share"
Help with AWS/Palo Alto firewalls and SSL Decryption
https://repost.aws/questions/QUswoBjteSSr2qVrwoh_mURw/help-with-aws-palo-alto-firewalls-and-ssl-decryption
false
"I have raised a Stackoverflow question as well regarding this, you can find more info here - https://stackoverflow.com/questions/57492795/amazon-dax-client-not-working-as-per-ttl-set-on-the-itemI am not sure what I am doing wrong but I am not seeing consistent reads from DAX client in accordance with the TTL set on the item. Even after the TTL is expired, I still see the item being returned from the cache. Can you please let me know if there is anything I am missing here. The items exist in Dynamodb as well for some time even after the TTL time is passed.FollowComment"
Amazon dax client not working as per TTL set on the item
https://repost.aws/questions/QUVfR2AnKHQC6i4h3HAnzOmw/amazon-dax-client-not-working-as-per-ttl-set-on-the-item
false
"0Hi Mahendranaga,TTL deletion may not always happen at expiry time. You can learn more about how TTL works with DynamoDB and DAX in our docs here:https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.htmlHere's the relevant excerpt:DynamoDB typically deletes expired items within 48 hours of expiration. The exact duration within which an item truly gets deleted after expiration is specific to the nature of the workload and the size of the table. Items that have expired and have not been deleted still appear in reads, queries, and scans. These items can still be updated, and successful updates to change or remove the expiration attribute are honored.Edited by: ArturoAtAWS on Aug 15, 2019 1:27 PMCommentShareAWS-User-6425534answered 4 years ago"
"Hi there, I'm new in that thread, and doing my first steps.I'm getting next error when trying to run test (for android):Failed to install com.bla-bla.bla-bla INSTALL_FAILED_NO_MATCHING_ABIS: Failed to extract native libraries, res=-113Here's the run: https://us-west-2.console.aws.amazon.com/devicefarm/home?region=us-east-1#/projects/e9705ee5-4015-478f-b9f1-c62f8b73ef49/runs/4e2589ec-d023-4c8e-ae4f-5b1c13b112cdWhat am I doing wrong?apk file of my app I've got with the help of EZ explorerEdited by: Sunshineauto on Sep 3, 2019 3:49 AMFollowComment"
"Device farm shows me "Failed to extract native libraries, res=-113" error"
https://repost.aws/questions/QUaAFnBo-GRUGArYWUz5LYpw/device-farm-shows-me-failed-to-extract-native-libraries-res-113-error
false
"0Hello,This error is likely caused due to the app you have compiled not being compatible for the device architecture(s) you have compiled it for (ie: armv7, x86, etc). To fix this issue, you need to recompile the app with support for multiple architectures enabled. In android studio, this can be done through the Advanced optionsCommentShareTobe-AWSanswered 4 years ago0Iy turned out that problem was hiding in the way i uploaded apk. I was getting it straight from emulator. But when I got this with AS it workedCommentShareSunshineautoanswered 4 years ago"
"I've subscribed to RDS events and I'm constantly getting the following event from one of my databases:Storage size <size> is approaching the maximum storage threshold <size>. Increase the maximum storage threshold.Observations:In the past the instance has been full to 90% of disk capacityCurrently Free Storage Space metric indicates that 75% of the disk is free.Couple of days have passed since we freed disk space on the instance.Data growth is very slowFollowCommentkabilesh PR 2 months agoActually we are getting disk threshold alert not the disk usage alert.Hope you have storage autos along enabled with your rds,Rds increments by 10% of the allocated storage, your storage auto scaling threshold is now less the 10%To stop the alert increase your disk auto scaling thresholdShare"
RDS Storage size is approaching the maximum storage threshold.
https://repost.aws/questions/QUAv-qT45lR1GuZfJQp3eV9Q/rds-storage-size-is-approaching-the-maximum-storage-threshold
false
"1You may want to consider using RDS storage auto scaling.CommentShareEXPERTkentradanswered 2 months ago0It would be good to increase it just in case.If a disk becomes unavailable, applications using RDS will be affected.CommentShareEXPERTRiku_Kobayashianswered 2 months ago0As documented, to stop the alert, you will need to set the maximum storage threshold to at least 26% more than the allocated storage.For example, if you have DB instance with 1000 GiB of allocated storage, then set the maximum storage threshold to at least 1100 GiB. If you don't, you get an error such as Invalid max storage size for engine_name. However, we recommend that you set the maximum storage threshold to at least 1260 GiB to avoid the event notification.CommentShareadrian-AWSanswered a month ago"
"bucket_name = ssm.get_parameter(Name='...', WithDecryption=False)['Parameter']['Value'] raise error_class(parsed_response, operation_name) botocore.errorfactory.ParameterNotFound: An error occurred (ParameterNotFound) when calling the GetParameter operation:FollowComment"
"After creating "ssm = boto3.client('ssm', 'global')" the following error has been displayed."
https://repost.aws/questions/QUzH1a7cNbT6KuFETKbzjRSw/after-creating-ssm-boto3-client-ssm-global-the-following-error-has-been-displayed
false
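ParameterNotFound usually means the client is looking in a different region (or account) from where the parameter was created, and 'global' is not a valid SSM region. A minimal sketch, assuming the parameter lives in us-east-1; the parameter name is a placeholder.

import boto3

# SSM Parameter Store is regional: create the client in the region
# where the parameter was actually created ('global' is not a region).
ssm = boto3.client("ssm", region_name="us-east-1")

bucket_name = ssm.get_parameter(
    Name="/my-app/bucket-name",   # placeholder parameter name
    WithDecryption=False,
)["Parameter"]["Value"]
print(bucket_name)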
"HiI am trying to send a custom application log from an AppStream fleet to Cloudwatch. I decided to give it a try like on a regular Windows EC2.First thing: There is already a config file $Env:ProgramData\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent.json cofigure for AppStream itself. In a first run I tried editing this file and adding my own log file block.However strangely when I generate the AS Image and start a fleet, the config file no longer holds my edits (it seems to have reverted to the default config file)I then tried leaving the default config file alone and just add a custom config file under $Env:ProgramData\Amazon\AmazonCloudWatchAgent\Configs directory. I know that at the end these will get consolidated in the toml file and it works well with EC2.In this case again it seems as if everything is reset when I generate the AS Image.I did find this blog post: https://aws.amazon.com/blogs/desktop-and-application-streaming/creating-custom-logging-and-amazon-cloudwatch-alerting-in-amazon-appstream-2-0/ but it look overly complicated for such simple thing I'm trying to achieve meaning pushing logs from a log file to cloudwatchAny ideas how I can achieve this and why AS removes all customization I can make to the cloud watch config ?FollowComment"
AppStream send custom logs to Cloudwatch using Cloudwatch agent
https://repost.aws/questions/QUz1yIoB9zSzWvTRE7RV_4NQ/appstream-send-custom-logs-to-cloudwatch-using-cloudwatch-agent
false
Trying to add a Redshift connection in SageMaker Canvas to import data. Cluster identifier: redshift-cluster-1; database name: dev; database user: awsuser; unload IAM role: my-reshift-role; connection name: redshift; type: IAM. The my-reshift-role trust relationship trusts "redshift.amazonaws.com" and "sagemaker.amazonaws.com". Expectation: the connection is created successfully. Actual result: RedshiftCreateConnectionError: Unable to validate connection. An error occurred when trying to list schema from Redshift.
SageMaker Canvas connect Redshift failed
https://repost.aws/questions/QUJvjatAJaQv-Ist96WT1IIw/sagemaker-canvas-connect-redshift-failed
true
"0Accepted AnswerThe sagemaker canvas using sagemaker domain user, so need add the Redshift permission to the IAM Role attached to domain user. After add the permission, the connection can be setupCommentShareAWS-User-8556114answered a year agoEXPERTFabrizio@AWSreviewed a year ago"
"We migrated one of our larger databases to Aurora on Monday, and since then we’ve been getting intermittent errors in our bug tracker related to missing tables, particularly during times of high load. In all cases, the tables in question are regularly swapped out with staging tables using RENAME TABLE x TO old_x, new_x TO x;In “real” MySQL this operation is atomic, however the intermittent query errors reporting non-existent tables suggest that this is not the case in Aurora MySQL. To clarify, the failures occur when other parts of the app logic try to query the table being swapped out. For example, following the above example, SELECT y FROM x will sometimes fail with Table {db_name}.x doesn’t exist. The rename operation itself completes without issue.Can anyone confirm this one way or the other? I’m struggling to find any supporting documentation or even anyone reporting the same issue. For context, we are using Aurora MySQL 2.10.2.FollowComment"
Aurora MySQL table rename not atomic?
https://repost.aws/questions/QUCuC84unATA6IIlAdplroBw/aurora-mysql-table-rename-not-atomic
false
"0Hi mpbarlow,I understand from you are getting an intermittent error related to missing tables when running queries especially in times of high load. Please correct me if I misunderstood.As you may know that MySQL(before 8.0) uses two Data Dictionaries (MySQL's own dictionary and InnoDB's dictionary), this issue might be due to both MySQL and InnoDB dictionaries not being in a consistent state (with each other) for a SQL statement to be able to access a table.You may disable the foreign_key_checks parameter. As the MySQL documentation[1] clearly warns of the possibility of dictionary inconsistency:"With foreign_key_checks=0, dropping an index required by a foreign key constraint places the table in an inconsistent state and causes the foreign key check that occurs at table load to fail."The consequence of DDLs run while with foreign_key_checks disabled is that because one of the tables in the FK relationship now has different structure than the other, MySQL can't handle the inconsistency and both tables will go missing.Reference:[1] https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_foreign_key_checksCommentShareWinnieanswered a year ago"
"I can't migrate from mySQL 5.X.X to 8.X.X.The PrePatchCompatibility.log shows the following:Schema inconsistencies resulting from file removal or corruptionFollowing tables show signs that either table datadir directory or frm file was removed/corrupted. Please check server logs, examine datadir to detect the issue and fix it before upgradescraper@002dapi.requests_pages - present in INFORMATION_SCHEMA's INNODB_SYS_TABLES table but missing from TABLES tableHowever when I look at the table information_schema.TABLES I see:TABLE_CATALOG | TABLE_SCHEMA |TABLE_NAME | TABLE_TYPE | ...def | scraper-api |requests_pages | BASE TABLE | ...The difference being in the dash of "scraper-api", but other tables with a dash in the name are well recognized.How can I fix it?Edited by: Mapi33 on May 23, 2019 7:32 AMEdited by: Mapi33 on May 23, 2019 7:33 AMFollowComment"
Migration error: "Schema inconsistencies resulting from file removal..."
https://repost.aws/questions/QUEKNkEJTGQFWCgnZGzGVbnw/migration-error-schema-inconsistencies-resulting-from-file-removal
false
0 I resolved it by changing the DB name to avoid the dash character. Note that it was also advised to convert some columns to utf8mb4; in fact it is not just advice, as the upgrade will not work without doing so. (Answered by Mapi33, 4 years ago)
"I am wondering if it is possible to determine the header key order in the request then use this value in rules?For example the header object would contain the followingHeadHeadhttpRequest.headers.0.nameHosthttpRequest.headers.0.valueapi.test.comhttpRequest.headers.1.nameuser-agenthttpRequest.headers.1.valueMozilla/5.0 (Linux; Android 10; SM-A217F) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Mobile Safari/537.36I want to check httpRequest.headers.1.name to see if this was user-agentFollowComment"
AWS WAFv2 determine header order
https://repost.aws/questions/QU7iqXil9nQAOoMYkiT6s9mw/aws-wafv2-determine-header-order
false
"0Hello HieuVu,Well, the order your seeing is a interpret format of the request for log view (ie., For Reading purpose and Filtering purpose in CW or Athena).Thus, at any point of time WAF only sees Key:Value (Ie., User-Agent:Mozilla/5.0...) header components NOT the order of the HTTP components.https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-fields.htmlRegards,CKCommentShareChethan SEanswered 10 months agoHieuVu 10 months agoHi Chethan,I understand that it is an interpret format, my questions is, if it is possible to determine the order that headers are in the headers, so either see what the second header is or determining what order did user-agent get sent in.Share"
"I just tried out Athena Engine version 3 with my existing workload (that works under engine version 2), and I get a number of errors like this one:NOT_SUPPORTED: Casting a Timestamp with Time Zone to Timestamp is not supported. You may need to manually clean the data at location 's3://[REDACTED]/tables/[REDACTED]' before retrying. Athena will not delete data in your account.Looking at the Athena Engine version 3 documentation, there is this section documenting this limitation:Casting from TimestampTZ to Timestamp is not supportedError Message: Casting a Timestamp with Time Zone to Timestamp is not supported.Suggested Solution: Any explicit or implicit cast from TimestampTZ to Timestamp throws the exception. If possible, remove the cast and use a different data type.And indeed the queries where I get this error are doing such a cast: CAST(from_iso8601_timestamp(regexp_extract("$path", '(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})Z'))AS TIMESTAMP)But my problem is that the suggestion to remove the CAST function doesn't work for me either, because my queries are CREATE TABLE AS SELECT statements, so when run something like this:CREATE TABLE table_name AS SELECT from_iso8601_timestamp(regexp_extract("$path", '(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})Z'))FROM source_table_nameI get this error:NOT_SUPPORTED: Unsupported Hive type: timestamp(3) with time zone. You may need to manually clean the data at location 's3://[REDACTED]/tables/[REDACTED]' before retrying. Athena will not delete data in your account.And indeed, the latter error is the reason I ever had the CAST function in these statements.Doing just the SELECT query (instead of `CREATE TABLE AS SELECT) works in the latter case... but I really do need to create the table.This becomes an even bigger problem when we look at the Trino documentation for date/time functions and notice that it has numerous function that return timestamp with time zone, and that it's very easy to run into this problem, like this CTAS statement illustrates:CREATE TABLE default.test WITH ( format = 'Parquet') ASSELECT -- All of these functions return TIMESTAMP WITH TIME ZONE: current_timestamp AS a, current_timestamp(6) AS b, from_iso8601_timestamp('2021-01-02T03:05:05') AS c, from_iso8601_timestamp_nanos('2021-01-02T03:05:05.123456') AS d, from_unixtime(0, 'UTC') AS e, now() AS f, parse_datetime('2022-04-05 15:55', 'yyyy-MM-DD HH:mm') AS g;Unless Athena Engine version 3 is patched to permit such casts, an easy, robust workaround needs to be documented.FollowComment"
"CREATE TABLE AS SELECT statement fails because function returns timestamp with timezone, but Athena Engine version 3 doesn't allow casting from TimestampTZ to Timestamp either"
https://repost.aws/questions/QUeHuk6tyDRX-IAQcSyM4c4Q/create-table-as-select-statement-fails-because-function-returns-timestamp-with-timezone-but-athena-engine-version-3-doesn-t-allow-casting-from-timestamptz-to-timestamp-either
false
"Hi,Looking for a solution where i can avoid Cloudwatch to get Memory Matrix of EC2FollowComment"
Options to get EC2 memory metrics other than CloudWatch
https://repost.aws/questions/QUO5rZcGjwSLSce9SbiQhHjA/options-to-get-ec2-memory-matrix-other-then-cloudwatch
false
"0In AWS console, unfortunately only using cloudwatch agent we can collect and monitor ec2 memory metrics. Other than this you can montior memory metrics within the ec2 server itself using resource monitoring tools such as ‘atop’.CommentShareSree_Canswered 7 months ago0Not sure what the destinations would be and your goals for monitoring. You have options either on marketplace/third party or custom code based on your choice to gather the matrix and push to destination systems. You may also have options to for historical resource usage for later analysis but as I mentioned depends on your monitoring goals.CommentShareEXPERTAWS-User-Nitinanswered 7 months ago"
"The Resource Groups Tagging API only retrieves resources that are or previously were tagged. Is the Tag Editor the same, or can it also retrieve resources that were never tagged?FollowComment"
Does Tag Editor only retrieve resources that are or previously were tagged or also resources that were never tagged?
https://repost.aws/questions/QU3H0TZbcQSmiFZsuNO2LkmQ/does-tag-editor-only-retrieve-resources-that-are-or-previously-were-tagged-or-also-resources-that-were-never-tagged
false
"0Hi thereFrom the note I understand you want to know if Tag Editor only retrieves resources that are or previously were or never tagged.With Tag Editor, you build a query to find resources in one or more AWS Regions that are available for tagging. You can choose up to 20 individual resource types, or build a query on All resource types. Your query can include resources that already have tags, or resources that have no tags.I hope this helps.References:https://docs.aws.amazon.com/ARG/latest/userguide/find-resources-to-tag.htmlCommentShareNonkululekoanswered a year ago"
"Hi, I'm trying to create an API Gateway authorizer via CloudFormation, and am getting "Internal Failure" when adding the API Autorizer shown below on deploying. Here's the segment: ApiAuthorizer: Type: AWS::ApiGatewayV2::Authorizer Properties: Name: MyCustomAuthorizer # "Api" is my CloudFormation API which gets created ok... ApiId: $Ref Api AuthorizerType: REQUEST # AuthorizerFunctionARN is a parameter set to the Lambda function's ARN AuthorizerUri: 'Fn::Sub': >- arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${AuthorizerFunctionARN}/invocation # I've tried with and without AuthorizerCredentialsArn AuthorizerCredentialsArn: !Sub "arn:aws:iam::${AWS::AccountId}:role/APIGatewayLambdaInvokeRole" IdentitySource: - route.request.header.Auth ApiAuthorizerPermission: Type: AWS::Lambda::Permission Properties: Action: lambda:InvokeFunction FunctionName: !Ref AuthorizerFunctionARN Principal: apigateway.amazonaws.com SourceArn: !Sub "arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${Api}/authorizers/${ApiAuthorizer}" I found this on GitHub and this on StackOverflow but not making any headway. Is there a way to get more detailed error info from CloudFormation than "Internal Failure"? Is there some permissions I need to set up? TIAFollowComment"
ApiGatewayV2::Authorizer - "Internal Failure"
https://repost.aws/questions/QUOSlAJqiTQhqG8QmfJd9Q8A/apigatewayv2-authorizer-internal-failure
false
"0In general, CloudTrail event history helps to get more information on any error occurred in CloudFormation deployment.This link might be helpful in reviewing Event history https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events-console.htmlIn this case, you can filter using Event Name (CreateAuthorizer) OR Event source (apigateway.amazonaws.com) or Username (IAM Role used by CloudFormation to deploy resources).Also, in order to isolate the issue, you can try creating an authorizer with similar configuration on API Gateway console and see if that gets created successfully. Then it might be an issue with CloudFormation.CommentShareSUPPORT ENGINEERSumit_K-Ranswered 4 months ago"
"I'm a beginner developer working on a deploying a full-stack project to lightsail container services and running into issues with deployment.Everything works fine locally on my docker desktop, my frontend is a react app in a nginx web server exposed on port 80. My backend is a nodejs server exposed on port 5000.The frontend image expects an env variable with the server url to make api requests (I'm using runtime env variables by the way, I got that working fine).The backend image expects an env variable with the frontend url to allow cors.Locally, I'm passing in the env variables as: FRONTEND_URL=http://localhost:8080 and BACKEND_URL=http://localhost:5000When I deploy to lightsail and I pass in the env variables, I'm following the steps listed in the documentation here for communication between containers within the same container service.https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-container-services-deployments#communication-between-containersI've tried passing in as the FRONTEND_URL env variable: service://localhost:80, service://localhost, http://localhost:80, http://localhostI've tried passing in as the BACKEND_URL env variable: service://localhost:5000, http://localhost:5000, I also tried using the private domain of my lightsail container service (e.g. container-react1.service.local:5000) .I'm selecting the frontend container for the public endpoint.I'm able to deploy the app and access the app as normal in all cases, but when I try to access the api route I get an error: net::ERR_CONNECTION_REFUSED.Any thoughts on getting around this?FollowCommentGabriel - AWS 5 days agoThis post appears related to your issue https://repost.aws/questions/QUoYzGuzJBSE-0iWuBeliLaw/communication-between-lightsail-containersShare"
Communicating between containers within the same container service in AWS LightSail
https://repost.aws/questions/QUjS8TM4QkRW2ScK9ppiaeQQ/communicating-between-containers-within-the-same-container-service-in-aws-lightsail
false
"after the latest changes to SES dashboard, my outbound emails are being sent without encryption : https://i.imgur.com/Vu0PdQE.pngDKIM and legacy TXT records are already verified in dashboardFollowComment"
amazonses.com did not encrypt the email
https://repost.aws/questions/QUnCg8s_KJQDCAoZ_T2nhWAw/amazonses-com-did-not-encrypt-the-email
false
"0See https://docs.aws.amazon.com/ses/latest/DeveloperGuide/security.html Amazon SES to Receiver.CommentSharemdibellaanswered 2 years ago0after applying the configuration set with TLS tick as required I stopped receiving emails, had to remove the configuration to start receiving the emailCommentSharecorusx9answered 2 years ago0That means SES cannot negotiate TLS with the receiving server. This could be caused by a number of conditions on the receiving server, including:not having a properly installed leaf certificate and any intermediatescertificate installed is expiredcertificate installed is signed by issuer that SES does not trustnot supporting TLS 1.0, 1.1, or 1.2not supporting a common cipher with SESYou can trying testing the receiver using this https://www.checktls.com/TestReceiver but passing the test there does guarantee common cipher sets. I don't see that Amazon publishes the cipher sets it supports.CommentSharemdibellaanswered 2 years ago0this helped me a lot, so the receiving server was only accepting 1.0 no idea how it disabled other tls versions anyway got it fixed thanksedit 1.0Edited by: corusx9 on Nov 22, 2021 6:53 AMCommentSharecorusx9answered 2 years ago"
"I created Ubuntu 22.04 live server on my ESXI.Then I've exported to OVF + vmdk.I've uploaded these 2 files into my S3 bucket.Then used aws app to import this image into my AMI so I could launch a new VM based on that image. This is my json for import-image:[ { "Description": "My vmdk", "Format": "vmdk", "Url": "s3://my-bucket/path/to/vm.vmdk" }]But aws ec2 import-image failed. I could see the error using aws ec2 describe-import-image-tasks:"Status": "deleted","StatusMessage": "ClientError: We were unable to read your import's initramfs/initrd to determine what drivers your import requires to run in EC2.",Now, as I said, my image is based on Ubuntu 22.04 live-server.Here I saw it might not be supported yet.But on the other hand, there's an image of Ubuntu 22.04 in EC2.Questions:According to my brief process - did I do anything wrong? Can I somehow prepare the image so the "initramfs/initrd" would be fine?Is Ubuntu 22.04 supported or not? If not, when will it be?Can I somehow import my image in another way that will pass that error? My base image is an .isoFollowComment"
aws ec2 import-image failed -> ClientError: We were unable to read your import's initramfs/initrd to determine what drivers your import requires to run in EC2
https://repost.aws/questions/QUktI4aBnEQWOPeedaGYdguA/aws-ec2-import-image-failed-clienterror-we-were-unable-to-read-your-import-s-initramfs-initrd-to-determine-what-drivers-your-import-requires-to-run-in-ec2
false
"Hi there,Understand that a normal status general-purpose Private CA, it costs USD$400 per private CA per month. https://aws.amazon.com/private-ca/pricing/And I need to understand for a DISABLED general-purpose Private CA (using the Disable button from the Actions), how much it costs per month.This's to understand and better plan my billing.Thanks,FollowComment"
How much will a disabled general-purpose Private CA cost per month?
https://repost.aws/questions/QUsmM6OvhdRvqN5SyPanqyFw/how-much-will-a-disabled-general-purpose-private-ca-cost-per-month
true
"1Accepted AnswerHi,please refer to https://aws.amazon.com/private-ca/pricing/ and https://docs.aws.amazon.com/privateca/latest/userguide/PCAUpdateCA.html.In the latter documentation you can see that if the certificate is DISABLED you are still charged for it.In summary, you pay for a Private CA certificate for the days from the time you create it until the certificate is deleted from PCA.CommentShareEXPERTMassimilianoAWSanswered 4 months agoEXPERTTushar_Jreviewed 4 months agoYangxi-NDI 4 months agoThanks, the links are helpful.Share"
Is there a way to programmatically detect which region AWS SSO is enabled in? I dont see anything in the APIs. However I see that AWS Console displays the region in which it is enabled by making a call to :https://us-east-1.console.aws.amazon.com/singlesignon/api/peregrineoperation: "DescribeRegisteredRegions"path: "/control/"Is there anyway we can achieve the same via APIs?FollowComment
Programmatically Detect which region AWS SSO is enabled in.
https://repost.aws/questions/QUR4HxOw4HQE689po1UFkY0w/programmatically-detect-which-region-aws-sso-is-enabled-in
false
0 I don't see that this is available in the existing APIs. The way I have solved similar situations in the past is to populate a Parameter Store parameter in each of my active regions that defines the SSO region. Another option is Secrets Manager, having it replicate the secret to your active regions. Either way, you can then get the SSO region via an API call. (Answered by kentrad, EXPERT, a year ago)
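A hedged sketch of the Parameter Store approach described above; the parameter name, the list of active regions, and the SSO region value are all placeholders.

import boto3

SSO_REGION = "us-east-1"                      # region where IAM Identity Center is enabled (placeholder)
ACTIVE_REGIONS = ["us-east-1", "eu-west-1"]   # regions your workloads run in (placeholder)

# One-time seeding: write the value into each active region.
for region in ACTIVE_REGIONS:
    boto3.client("ssm", region_name=region).put_parameter(
        Name="/org/sso-region", Value=SSO_REGION, Type="String", Overwrite=True
    )

# At runtime, any workload can look it up in its own region.
local_ssm = boto3.client("ssm")  # uses the caller's current region
sso_region = local_ssm.get_parameter(Name="/org/sso-region")["Parameter"]["Value"]
print(sso_region)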
"I need to run an earlier version of Visio in my AppStream.https://support.microsoft.com/en-us/topic/how-to-revert-to-an-earlier-version-of-office-2bd5c457-a917-d57e-35a1-f709e3dda841Tells how to do this and I have successfully done it on my local system.When I attempt in ImageBuilder, it pretends to work but doesn't.Is there a way to do this?FollowComment"
Is there a way to force load earlier versions of Office Apps in AppStream Image Builder
https://repost.aws/questions/QUZO1sNL51TSWOhMf_IARlpA/is-there-a-way-to-force-load-earlier-versions-of-office-apps-in-appstream-image-builder
true
"0Accepted AnswerImage Builders do not come with Office pre-installed on it. However, if you have Office installed on the Image Builder, you would be able to use the same process on the Microsoft documentation to revert back to the previous Visio version. You must do this with the Administrators account on the Image Builder.Considering that AppStream 2.0 is non-persistent VDI platform which means that each session will create new Instances from Images created from Image Builder. Office Licensing and mobility would not be ideal hence, Office settings on the Image builder might not persist.I would recommend using Use Session Scripts to Manage Your AppStream 2.0 to download the Office Deployment tools, save the config.exe file and run them upon start of a new session.This ensures that before new sessions are provisioned, new Instances would be configured to use the previous versions of the Visio.Note that Office license must have the license mobility to allow multiple use on different VDIsFeel free to reach out to the Application provider should this not on AppStream 2.0 or if the solution does not support non-VDI. You can also contact AWS Premium Support. See article for steps https://aws.amazon.com/premiumsupport/knowledge-center/aws-phone-support/CommentShareChibuike_Aanswered a year ago0Thanks Chibuike_A -- Using Session Scripts... looks like a good next attempt for me. Will take a while for me to fully digest and test, but will "accept" for now. I did use the MS doc reference running as Admin, but as noted in original post it pretends to work, but really doesn't.CommentSharePaulv45answered a year ago"
"Created a snapshot of my instance and made some unwanted changes in DB. When I restore backup from snapshot backup, it is failed due to same "DB Instance Identifier". How to restore snapshot to existing instance without deleted old one?FollowComment"
AWS restore rds snapshot to existing instance
https://repost.aws/questions/QUeYbW-QYdSUyNJwShCI8Bnw/aws-restore-rds-snapshot-to-existing-instance
false
0Restoring from a snapshot creates a new database resource. But you can rename the old and new DB instances if you are trying to restore from a backup and want to retain the original DB identifier.CommentShareMODERATORphilawsanswered 5 months agorePost-User-1944103 5 months agoHow can I back up and restore the old and new DB instances without renaming/deleting?Share0You cannot restore from a snapshot to an existing DB instance. Ref: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.htmlCommentShareSydanswered 5 months agoEXPERTkentradreviewed 5 months ago
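A rough Node.js (aws-sdk v2) sketch of that rename-then-restore approach; the instance, snapshot, and temporary identifiers are hypothetical, and note that renaming an instance also changes its endpoint DNS name:

const AWS = require('aws-sdk');
const rds = new AWS.RDS();

const ORIGINAL_ID = 'mydb';                     // hypothetical identifiers
const RENAMED_ID = 'mydb-old';
const SNAPSHOT_ID = 'mydb-snapshot-2023-01-01';

async function restoreKeepingIdentifier() {
  // 1. Rename the existing instance so its identifier becomes free.
  await rds.modifyDBInstance({
    DBInstanceIdentifier: ORIGINAL_ID,
    NewDBInstanceIdentifier: RENAMED_ID,
    ApplyImmediately: true,
  }).promise();

  // 2. Wait for the renamed instance to become available again.
  await rds.waitFor('dBInstanceAvailable', { DBInstanceIdentifier: RENAMED_ID }).promise();

  // 3. Restore the snapshot into a new instance that reuses the original identifier.
  await rds.restoreDBInstanceFromDBSnapshot({
    DBInstanceIdentifier: ORIGINAL_ID,
    DBSnapshotIdentifier: SNAPSHOT_ID,
  }).promise();
}

restoreKeepingIdentifier().catch(console.error);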
"Hi All,I'm trying to create a WebAcl waf association with a ALB using Jenkins Ci/CD. The Jenkins user has full admin permissions on the account. I've even added:- PolicyName: Regional-Waf PolicyDocument: Version: "2012-10-17" Statement: Effect: Allow Action: - wafv2:* Resource: "*"Just to make sure... But... I get the following cloudformation AFTER the association is created...Resource handler returned message: "User: arn:aws:iam::${AWS::AccountId}:user/Jenkins is not authorized to perform: wafv2:GetWebACL on resource: arn:aws:wafv2:${AWS::Region}:${AWS::AccountId}:regional/webacl/waf-webacl-qa/789b4eed-77cf-4108-918f-0fa016a14cf7 with an explicit deny in an identity-based policy (Service: Wafv2, Status Code: 400, Request ID: ccba5209-7fb7-4ac9-b358-90131bf45e3d, Extended Request ID: null)" (RequestToken: 0bdbad29-c5b9-7fcc-51f6-fe011d6b8057, HandlerErrorCode: GeneralServiceException)So, YES to association is created by the Jenkins user. But immediately after that, cfn gives this error...The WAF is Regional. So no cloudfront.FollowComment"
Cloudformation WAF Association
https://repost.aws/questions/QUMPKbTNlYT6yS7dC_QtVpIw/cloudformation-waf-association
false
0Found the fix... Just don't understand it... I had an IP restriction policy attached to the Jenkins user with the IPs of the agents and master... - PolicyName: IPRestricteddPolicyForServiceAccounts PolicyDocument: Version: "2012-10-17" Statement: Effect: Deny Action: "*" Resource: "*" Condition: NotIpAddress: aws:SourceIp: - *******/32 - ********/32 - *******/32 - *******/32 Removing this one fixed it... But can somebody explain to me... Why?CommentSharerePost-User-7692897answered 9 months ago
"Hi,I have an existing AWS Backup setup for Aurora, which I created via the console UI. I have now put together a cloudformation template for that which I'd like to import - I'm following through the import with existing resources wizard, but hitting an error I'm unable to understand.After selecting the new template I am asked to enter on the UIAWS::Backup::BackupVault - BackupVaultNameAWS::Backup::BackupPlan - BackupPlanIdAWS::Backup::BackupSelection - IdOn entering these value and then hitting next a few times to get to the final screen. It will load for a few moments calculating the change set and then say"Backup Plan ID and Selection ID must be provided"Although I do enter those values during the wizard. Any suggestions? ThanksTemplate below - This work all as expected if the Backup Plan does not currently existAWSTemplateFormatVersion: 2010-09-09Description: >- Create RDS BackupParameters: OnlyCreateVault: Description: This is for the DR region. Only other required parameters are Environment and CostAllocation Type: String Default: false AllowedValues: [true, false] DestinationBackupVaultArn: Type: String ResourceSelectionIamRoleArn: Type: String ResourceSelectionArn: Description: Comma separated list of resource ARNs Type: String CostAllocation: Type: String AllowedValues: - 'Dev' - 'Demo' - 'Test' - 'Live' Environment: Type: String AllowedValues: - 'develop' - 'testing' - 'testenv' - 'demo' - 'live' - 'dr'Conditions: CreateAllResources: !Equals [!Ref OnlyCreateVault, false] Resources: Vault: Type: AWS::Backup::BackupVault DeletionPolicy: Delete Properties: BackupVaultName: !Sub backup-vault-${Environment}-rds-1 BackupVaultTags: CostAllocation: !Ref CostAllocation Plan: Condition: CreateAllResources Type: AWS::Backup::BackupPlan DeletionPolicy: Delete Properties: BackupPlan: BackupPlanName: !Sub backup-plan-${Environment}-rds-1 BackupPlanRule: - RuleName: !Sub backup-rule-${Environment}-daily-1 CompletionWindowMinutes: 720 CopyActions: - DestinationBackupVaultArn: !Ref DestinationBackupVaultArn Lifecycle: DeleteAfterDays: 7 EnableContinuousBackup: true Lifecycle: DeleteAfterDays: 35 StartWindowMinutes: 120 ScheduleExpression: cron(0 1 ? * * *) TargetBackupVault: !Sub backup-vault-${Environment}-rds-1 - RuleName: !Sub backup-rule-${Environment}-weekly-1 CompletionWindowMinutes: 720 CopyActions: - DestinationBackupVaultArn: !Ref DestinationBackupVaultArn Lifecycle: DeleteAfterDays: 35 EnableContinuousBackup: false Lifecycle: DeleteAfterDays: 42 StartWindowMinutes: 120 ScheduleExpression: cron(0 1 ? * * *) TargetBackupVault: !Sub backup-vault-${Environment}-rds-1 - RuleName: !Sub backup-rule-${Environment}-monthly-1 CompletionWindowMinutes: 720 CopyActions: - DestinationBackupVaultArn: !Ref DestinationBackupVaultArn Lifecycle: MoveToColdStorageAfterDays: 365 EnableContinuousBackup: false Lifecycle: DeleteAfterDays: 365 StartWindowMinutes: 120 ScheduleExpression: cron(0 1 ? * * *) TargetBackupVault: !Sub backup-vault-${Environment}-rds-1 BackupPlanTags: CostAllocation: Ref: CostAllocation ResourceSelection: Condition: CreateAllResources Type: AWS::Backup::BackupSelection DeletionPolicy: Delete Properties: BackupPlanId: !Ref Plan BackupSelection: IamRoleArn: !Ref ResourceSelectionIamRoleArn Resources: !Split [",", !Ref ResourceSelectionArn] SelectionName: !Sub backup-resource-${Environment}-rds-1 FollowComment"
CloudFormation - Importing existing AWS Backup
https://repost.aws/questions/QUSNkRc8e8ROy_AqgCSEvQWg/cloudformation-importing-existing-aws-backup
false
"Hi,I'm trying since a few hours to install the PHP intl extension without success.Actual version running on my instance is :PHP 7.2.23 (cli) (built: Oct 21 2019 17:24:05) ( NTS )Copyright (c) 1997-2018 The PHP GroupZend Engine v3.2.0, Copyright (c) 1998-2018 Zend TechnologiesI have checked if the extension is already existing with info.php but no intl extension installed.Among other I havesudo yum install php7.2-intlbut the system tell me :[root@whm /]# sudo yum install php7.2-intlLoaded plugins: fastestmirror, universal-hooksLoading mirror speeds from cached hostfile * EA4: 203.174.85.202 * cpanel-addons-production-feed: 203.174.85.202 * cpanel-plugins: 203.174.85.202 * base: centos.mirrors.estointernet.in * extras: centos.mirrors.estointernet.in * updates: centos.mirrors.estointernet.inNo package php7.2-intl available.Error: Nothing to doI have enabled the extension in my php.ini file incd /opt/cpanel/ea-php72/root/etcnano php.iniextension=intl.soI need this extension for my Laravel 5.7 extension which now throw me errors :use IntlDateFormatter;Error :Class 'IntlDateFormatter' not foundI'm stuck, would appreciate expertise here. Thanks in advance, cheersEdited by: marcQ on Oct 29, 2019 4:51 PMFollowComment"
Unable to Install intl PHP extension on my AWS EC2 (CPANEL) instance
https://repost.aws/questions/QU83StdMfETUKckymLL6l5Zg/unable-to-install-intl-php-extension-on-my-aws-ec2-cpanel-instance
false
"0That might help :Using username "webamazingapps".Authenticating with public key "imported-openssh-key"Passphrase for key "imported-openssh-key":Last login: Tue Oct 29 09:01:43 2019 from 113.161.59.72[webamazingapps@whm ~]$ php -iphpinfo()PHP Version => 7.2.23System => Linux whm.webamazingapps.com 3.10.0-1062.4.1.el7.x86_64 #1 SMP Fri Oct 18 17:15:30 UTC 2019 x86_64Build Date => Oct 21 2019 17:21:07Configure Command => './configure' '--build=x86_64-redhat-linux-gnu' '--host=x 86_64-redhat-linux-gnu' '--program-prefix=' '--disable-dependency-tracking' '--p refix=/opt/cpanel/ea-php72/root/usr' '--exec-prefix=/opt/cpanel/ea-php72/root/us r' '--bindir=/opt/cpanel/ea-php72/root/usr/bin' '--sbindir=/opt/cpanel/ea-php72/ root/usr/sbin' '--sysconfdir=/opt/cpanel/ea-php72/root/etc' '--datadir=/opt/cpan el/ea-php72/root/usr/share' '--includedir=/opt/cpanel/ea-php72/root/usr/include' '--libdir=/opt/cpanel/ea-php72/root/usr/lib64' '--libexecdir=/opt/cpanel/ea-php 72/root/usr/libexec' '--localstatedir=/opt/cpanel/ea-php72/root/usr/var' '--shar edstatedir=/opt/cpanel/ea-php72/root/usr/com' '--mandir=/opt/cpanel/ea-php72/roo t/usr/share/man' '--infodir=/opt/cpanel/ea-php72/root/usr/share/info' '--cache-f ile=../config.cache' '--with-libdir=lib64' '--with-config-file-path=/opt/cpanel/ ea-php72/root/etc' '--with-config-file-scan-dir=/opt/cpanel/ea-php72/root/etc/ph p.d' '--disable-debug' '--with-password-argon2=/opt/cpanel/libargon2' '--with-pi c' '--without-pear' '--with-bz2' '--with-freetype-dir=/usr' '--with-png-dir=/usr ' '--with-xpm-dir=/usr' '--enable-gd-native-ttf' '--without-gdbm' '--with-gettex t' '--with-iconv' '--with-jpeg-dir=/usr' '--with-openssl=/opt/cpanel/ea-openssl' '--with-openssl-dir=/opt/cpanel/ea-openssl' '--with-pcre-regex=/usr' '--with-zl ib' '--with-layout=GNU' '--enable-exif' '--enable-ftp' '--enable-sockets' '--wit h-kerberos' '--enable-shmop' '--with-libxml-dir=/opt/cpanel/ea-libxml2' '--with- system-tzdata' '--with-mhash' '--libdir=/opt/cpanel/ea-php72/root/usr/lib64/php' '--enable-pcntl' '--enable-opcache' '--disable-opcache-file' '--enable-phpdbg' '--with-imap=shared,/opt/cpanel/ea-php72/root/usr' '--with-imap-ssl' '--enable-m bstring=shared' '--enable-mbregex' '--with-webp-dir=/usr' '--with-gd=shared' '-- with-gmp=shared' '--enable-calendar=shared' '--enable-bcmath=shared' '--with-bz2 =shared' '--enable-ctype=shared' '--enable-dba=shared' '--with-db4=/usr' '--with -tcadb=/usr' '--enable-exif=shared' '--enable-ftp=shared' '--with-gettext=shared ' '--with-iconv=shared' '--enable-sockets=shared' '--enable-tokenizer=shared' '- -with-xmlrpc=shared' '--with-ldap=shared' '--with-ldap-sasl' '--enable-mysqlnd=s hared' '--with-mysqli=shared,mysqlnd' '--with-mysql-sock=/var/lib/mysql/mysql.so ck' '--enable-dom=shared' '--with-pgsql=shared' '--enable-simplexml=shared' '--e nable-xml=shared' '--enable-wddx=shared' '--with-snmp=shared,/usr' '--enable-soa p=shared' '--with-xsl=shared,/usr' '--enable-xmlreader=shared' '--enable-xmlwrit er=shared' '--with-curl=shared,/opt/cpanel/libcurl' '--enable-pdo=shared' '--wit h-pdo-odbc=shared,unixODBC,/usr' '--with-pdo-mysql=shared,mysqlnd' '--with-pdo-p gsql=shared,/usr' '--with-pdo-sqlite=shared,/usr' '--with-sqlite3=shared,/usr' ' --enable-json=shared' '--enable-zip=shared' '--without-readline' '--with-libedit ' '--with-pspell=shared' '--enable-phar=shared' '--with-tidy=shared,/opt/cpanel/ libtidy' '--enable-sysvmsg=shared' '--enable-sysvshm=shared' '--enable-sysvsem=s hared' '--enable-shmop=shared' '--enable-posix=shared' 
'--with-unixODBC=shared,/ usr' '--enable-intl=shared' '--with-icu-dir=/usr' '--with-enchant=shared,/usr' ' --with-recode=shared,/usr' '--enable-fileinfo=shared' 'build_alias=x86_64-redhat -linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall -Wp, -D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-siz e=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -m tune=generic -fno-strict-aliasing -Wno-pointer-sign' 'CXXFLAGS=-O2 -g -pipe -Wal l -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buff er-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 - m64 -mtune=generic'Server API => Command Line InterfaceVirtual Directory Support => disabledConfiguration File (php.ini) Path => /opt/cpanel/ea-php72/root/etcLoaded Configuration File => /opt/cpanel/ea-php72/root/etc/php.iniScan this dir for additional .ini files => /opt/cpanel/ea-php72/root/etc/php.dAdditional .ini files parsed => /opt/cpanel/ea-php72/root/etc/php.d/20-bcmath.in i,/opt/cpanel/ea-php72/root/etc/php.d/20-calendar.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-ctype.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-curl.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-dom.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-ftp.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-gd.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-iconv.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-imap.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-json.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-mysqlnd.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-pdo.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-phar.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-posix.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-simplexml.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-sockets.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-sqlite3.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-tokenizer.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-xml.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-xmlwriter.ini,/opt/cpanel/ea-php72/root/etc/php.d/20-xsl.ini,/opt/cpanel/ea-php72/root/etc/php.d/30-mysqli.ini,/opt/cpanel/ea-php72/root/etc/php.d/30-pdo_mysql.ini,/opt/cpanel/ea-php72/root/etc/php.d/30-pdo_sqlite.ini,/opt/cpanel/ea-php72/root/etc/php.d/30-wddx.ini,/opt/cpanel/ea-php72/root/etc/php.d/30-xmlreader.ini,/opt/cpanel/ea-php72/root/etc/php.d/zzzzzzz-pecl.iniPHP API => 20170718PHP Extension => 20170718Zend Extension => 320170718Zend Extension Build => API320170718,NTSPHP Extension Build => API20170718,NTSDebug Build => noThread Safety => disabledZend Signal Handling => enabledZend Memory Manager => enabledZend Multibyte Support => disabledIPv6 Support => enabledDTrace Support => disabledRegistered PHP Streams => https, ftps, compress.zlib, php, file, glob, data, htt p, ftp, pharRegistered Stream Socket Transports => tcp, udp, unix, udg, ssl, tls, tlsv1.0, t lsv1.1, tlsv1.2Registered Stream Filters => zlib.*, string.rot13, string.toupper, string.tolowe r, string.strip_tags, convert.*, consumed, dechunk, convert.iconv.*This program makes use of the Zend Scripting Language Engine:Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies _______________________________________________________________________ConfigurationbcmathBCMath support => enabledDirective => Local Value => Master Valuebcmath.scale => 0 => 0calendarCalendar support => enabledCorePHP Version => 7.2.23Directive => Local Value => Master Valueallow_url_fopen => On => Onallow_url_include => Off => Offarg_separator.input => & => &arg_separator.output => & => 
&auto_append_file => no value => no valueauto_globals_jit => On => Onauto_prepend_file => no value => no valuebrowscap => no value => no valuedefault_charset => UTF-8 => UTF-8default_mimetype => text/html => text/htmldisable_classes => no value => no valuedisable_functions => no value => no valuedisplay_errors => Off => Offdisplay_startup_errors => Off => Offdoc_root => no value => no valuedocref_ext => no value => no valuedocref_root => no value => no valueenable_dl => Off => Offenable_post_data_reading => On => Onerror_append_string => no value => no valueerror_log => error_log => error_logerror_prepend_string => no value => no valueerror_reporting => 32759 => 32759expose_php => Off => Offextension_dir => /opt/cpanel/ea-php72/root/usr/lib64/php/modules => /opt/cpanel/ ea-php72/root/usr/lib64/php/modulesfile_uploads => On => Onhard_timeout => 2 => 2highlight.comment => <font style="color: #FF8000">#FF8000</font> => <font style= "color: #FF8000">#FF8000</font>highlight.default => <font style="color: #0000BB">#0000BB</font> => <font style= "color: #0000BB">#0000BB</font>highlight.html => <font style="color: #000000">#000000</font> => <font style="co lor: #000000">#000000</font>highlight.keyword => <font style="color: #007700">#007700</font> => <font style= "color: #007700">#007700</font>highlight.string => <font style="color: #DD0000">#DD0000</font> => <font style=" color: #DD0000">#DD0000</font>html_errors => Off => Offignore_repeated_errors => Off => Offignore_repeated_source => Off => Offignore_user_abort => Off => Offimplicit_flush => On => Oninclude_path => .:/opt/cpanel/ea-php72/root/usr/share/pear => .:/opt/cpanel/ea-p hp72/root/usr/share/pearinput_encoding => no value => no valueinternal_encoding => no value => no valuelog_errors => On => Onlog_errors_max_len => 1024 => 1024mail.add_x_header => On => Onmail.force_extra_parameters => no value => no valuemail.log => no value => no valuemax_execution_time => 0 => 0max_file_uploads => 20 => 20max_input_nesting_level => 64 => 64max_input_time => -1 => -1max_input_vars => 1000 => 1000memory_limit => 512M => 512Mopen_basedir => no value => no valueoutput_buffering => 0 => 0output_encoding => no value => no valueoutput_handler => no value => no valuepost_max_size => 256M => 256Mprecision => 14 => 14realpath_cache_size => 4096K => 4096Krealpath_cache_ttl => 120 => 120register_argc_argv => On => Onreport_memleaks => On => Onreport_zend_debug => Off => Offrequest_order => GP => GPsendmail_from => no value => no valuesendmail_path => /usr/sbin/sendmail -t -i => /usr/sbin/sendmail -t -iserialize_precision => 100 => 100short_open_tag => On => OnSMTP => localhost => localhostsmtp_port => 25 => 25sys_temp_dir => no value => no valuetrack_errors => Off => Offunserialize_callback_func => no value => no valueupload_max_filesize => 256M => 256Mupload_tmp_dir => no value => no valueuser_dir => no value => no valueuser_ini.cache_ttl => 300 => 300user_ini.filename => .user.ini => .user.inivariables_order => GPCS => GPCSxmlrpc_error_number => 0 => 0xmlrpc_errors => Off => Offzend.assertions => -1 => -1zend.detect_unicode => On => Onzend.enable_gc => On => Onzend.multibyte => Off => Offzend.script_encoding => no value => no valuezend.signal_check => Off => Offctypectype functions => enabledcurlcURL support => enabledcURL Information => 7.66.0Age => 5FeaturesAsynchDNS => YesCharConv => NoDebug => NoGSS-Negotiate => NoIDN => NoIPv6 => Yeskrb4 => NoLargefile => Yeslibz => YesNTLM => YesNTLMWB => YesSPNEGO => YesSSL => YesSSPI => NoTLS-SRP => YesHTTP2 => 
YesGSSAPI => YesKERBEROS5 => YesUNIX_SOCKETS => YesPSL => NoProtocols => dict, file, ftp, ftps, gopher, http, https, imap, imaps, pop3, pop3 s, rtsp, scp, sftp, smb, smbs, smtp, smtps, telnet, tftpHost => x86_64-redhat-linux-gnuSSL Version => OpenSSL/1.0.2tZLib Version => 1.2.7libSSH Version => libssh2/1.4.3datedate/time support => enabledtimelib version => 2017.09"Olson" Timezone Database Version => 0.systemTimezone Database => internalDefault timezone => UTCDirective => Local Value => Master Valuedate.default_latitude => 31.7667 => 31.7667date.default_longitude => 35.2333 => 35.2333date.sunrise_zenith => 90.583333 => 90.583333date.sunset_zenith => 90.583333 => 90.583333date.timezone => UTC => UTCdomDOM/XML => enabledDOM/XML API Version => 20031129libxml Version => 2.9.7HTML Support => enabledXPath Support => enabledXPointer Support => enabledSchema Support => enabledRelaxNG Support => enabledfilterInput Validation and Filtering => enabledRevision => $Id: 5a34caaa246b9df197f4b43af8ac66a07464fe4b $Directive => Local Value => Master Valuefilter.default => unsafe_raw => unsafe_rawfilter.default_flags => no value => no valueftpFTP support => enabledFTPS support => enabledgdGD Support => enabledGD Version => bundled (2.1.0 compatible)FreeType Support => enabledFreeType Linkage => with freetypeFreeType Version => 2.4.11GIF Read Support => enabledGIF Create Support => enabledJPEG Support => enabledlibJPEG Version => 6bPNG Support => enabledlibPNG Version => 1.5.13WBMP Support => enabledXPM Support => enabledlibXpm Version => 30411XBM Support => enabledWebP Support => enabledDirective => Local Value => Master Valuegd.jpeg_ignore_warning => 1 => 1hashhash support => enabledHashing Engines => md2 md4 md5 sha1 sha224 sha256 sha384 sha512/224 sha512/256 s ha512 sha3-224 sha3-256 sha3-384 sha3-512 ripemd128 ripemd160 ripemd256 ripemd32 0 whirlpool tiger128,3 tiger160,3 tiger192,3 tiger128,4 tiger160,4 tiger192,4 sn efru snefru256 gost gost-crypto adler32 crc32 crc32b fnv132 fnv1a32 fnv164 fnv1a 64 joaat haval128,3 haval160,3 haval192,3 haval224,3 haval256,3 haval128,4 haval 160,4 haval192,4 haval224,4 haval256,4 haval128,5 haval160,5 haval192,5 haval224 ,5 haval256,5MHASH support => EnabledMHASH API Version => Emulated Supporticonviconv support => enablediconv implementation => glibciconv library version => 2.17Directive => Local Value => Master Valueiconv.input_encoding => no value => no valueiconv.internal_encoding => no value => no valueiconv.output_encoding => no value => no valueimapIMAP c-Client Version => 2007fSSL Support => enabledKerberos Support => enabledDirective => Local Value => Master Valueimap.enable_insecure_rsh => Off => Offjsonjson support => enabledjson version => 1.6.0libxmllibXML support => activelibXML Compiled Version => 2.9.7libXML Loaded Version => 20907libXML streams => enabledmysqliMysqlI Support => enabledClient API library version => mysqlnd 5.0.12-dev - 20150407 - $Id: 3591daad22de0 8524295e1bd073aceeff11e6579 $Active Persistent Links => 0Inactive Persistent Links => 0Active Links => 0Directive => Local Value => Master Valuemysqli.allow_local_infile => Off => Offmysqli.allow_persistent => On => Onmysqli.default_host => no value => no valuemysqli.default_port => 3306 => 3306mysqli.default_pw => no value => no valuemysqli.default_socket => /var/lib/mysql/mysql.sock => /var/lib/mysql/mysql.sockmysqli.default_user => no value => no valuemysqli.max_links => Unlimited => Unlimitedmysqli.max_persistent => Unlimited => Unlimitedmysqli.reconnect => Off => 
Offmysqli.rollback_on_cached_plink => Off => Offmysqlndmysqlnd => enabledVersion => mysqlnd 5.0.12-dev - 20150407 - $Id: 3591daad22de08524295e1bd073aceef f11e6579 $Compression => supportedcore SSL => supportedextended SSL => supportedCommand buffer size => 4096Read buffer size => 32768Read timeout => 86400Collecting statistics => YesCollecting memory statistics => NoTracing => n/aLoaded plugins => mysqlnd,debug_trace,auth_plugin_mysql_native_password,auth_plu gin_mysql_clear_password,auth_plugin_sha256_passwordAPI Extensions => mysqli,pdo_mysqlmysqlnd statistics =>bytes_sent => 0bytes_received => 0packets_sent => 0packets_received => 0protocol_overhead_in => 0protocol_overhead_out => 0bytes_received_ok_packet => 0bytes_received_eof_packet => 0bytes_received_rset_header_packet => 0bytes_received_rset_field_meta_packet => 0bytes_received_rset_row_packet => 0bytes_received_prepare_response_packet => 0bytes_received_change_user_packet => 0packets_sent_command => 0packets_received_ok => 0packets_received_eof => 0packets_received_rset_header => 0packets_received_rset_field_meta => 0packets_received_rset_row => 0packets_received_prepare_response => 0packets_received_change_user => 0result_set_queries => 0non_result_set_queries => 0no_index_used => 0bad_index_used => 0slow_queries => 0buffered_sets => 0unbuffered_sets => 0ps_buffered_sets => 0ps_unbuffered_sets => 0flushed_normal_sets => 0flushed_ps_sets => 0ps_prepared_never_executed => 0ps_prepared_once_executed => 0rows_fetched_from_server_normal => 0rows_fetched_from_server_ps => 0rows_buffered_from_client_normal => 0rows_buffered_from_client_ps => 0rows_fetched_from_client_normal_buffered => 0rows_fetched_from_client_normal_unbuffered => 0rows_fetched_from_client_ps_buffered => 0rows_fetched_from_client_ps_unbuffered => 0rows_fetched_from_client_ps_cursor => 0rows_affected_normal => 0rows_affected_ps => 0rows_skipped_normal => 0rows_skipped_ps => 0copy_on_write_saved => 0copy_on_write_performed => 0command_buffer_too_small => 0connect_success => 0connect_failure => 0connection_reused => 0reconnect => 0pconnect_success => 0active_connections => 0active_persistent_connections => 0explicit_close => 0implicit_close => 0disconnect_close => 0in_middle_of_command_close => 0explicit_free_result => 0implicit_free_result => 0explicit_stmt_close => 0implicit_stmt_close => 0mem_emalloc_count => 0mem_emalloc_amount => 0mem_ecalloc_count => 0mem_ecalloc_amount => 0mem_erealloc_count => 0mem_erealloc_amount => 0mem_efree_count => 0mem_efree_amount => 0mem_malloc_count => 0mem_malloc_amount => 0mem_calloc_count => 0mem_calloc_amount => 0mem_realloc_count => 0mem_realloc_amount => 0mem_free_count => 0mem_free_amount => 0mem_estrndup_count => 0mem_strndup_count => 0mem_estrdup_count => 0mem_strdup_count => 0mem_edupl_count => 0mem_dupl_count => 0proto_text_fetched_null => 0proto_text_fetched_bit => 0proto_text_fetched_tinyint => 0proto_text_fetched_short => 0proto_text_fetched_int24 => 0proto_text_fetched_int => 0proto_text_fetched_bigint => 0proto_text_fetched_decimal => 0proto_text_fetched_float => 0proto_text_fetched_double => 0proto_text_fetched_date => 0proto_text_fetched_year => 0proto_text_fetched_time => 0proto_text_fetched_datetime => 0proto_text_fetched_timestamp => 0proto_text_fetched_string => 0proto_text_fetched_blob => 0proto_text_fetched_enum => 0proto_text_fetched_set => 0proto_text_fetched_geometry => 0proto_text_fetched_other => 0proto_binary_fetched_null => 0proto_binary_fetched_bit => 0proto_binary_fetched_tinyint => 
0proto_binary_fetched_short => 0proto_binary_fetched_int24 => 0proto_binary_fetched_int => 0proto_binary_fetched_bigint => 0proto_binary_fetched_decimal => 0proto_binary_fetched_float => 0proto_binary_fetched_double => 0proto_binary_fetched_date => 0proto_binary_fetched_year => 0proto_binary_fetched_time => 0proto_binary_fetched_datetime => 0proto_binary_fetched_timestamp => 0proto_binary_fetched_string => 0proto_binary_fetched_json => 0proto_binary_fetched_blob => 0proto_binary_fetched_enum => 0proto_binary_fetched_set => 0proto_binary_fetched_geometry => 0proto_binary_fetched_other => 0init_command_executed_count => 0init_command_failed_count => 0com_quit => 0com_init_db => 0com_query => 0com_field_list => 0com_create_db => 0com_drop_db => 0com_refresh => 0com_shutdown => 0com_statistics => 0com_process_info => 0com_connect => 0com_process_kill => 0com_debug => 0com_ping => 0com_time => 0com_delayed_insert => 0com_change_user => 0com_binlog_dump => 0com_table_dump => 0com_connect_out => 0com_register_slave => 0com_stmt_prepare => 0com_stmt_execute => 0com_stmt_send_long_data => 0com_stmt_close => 0com_stmt_reset => 0com_stmt_set_option => 0com_stmt_fetch => 0com_deamon => 0bytes_received_real_data_normal => 0bytes_received_real_data_ps => 0opensslOpenSSL support => enabledOpenSSL Library Version => OpenSSL 1.0.2t 10 Sep 2019OpenSSL Header Version => OpenSSL 1.0.2t 10 Sep 2019Openssl default config => /opt/cpanel/ea-openssl/etc/pki/tls/openssl.cnfDirective => Local Value => Master Valueopenssl.cafile => no value => no valueopenssl.capath => no value => no valuepcntlpcntl support => enabledpcrePCRE (Perl Compatible Regular Expressions) Support => enabledPCRE Library Version => 8.32 2012-11-30PCRE JIT Support => enabledDirective => Local Value => Master Valuepcre.backtrack_limit => 1000000 => 1000000pcre.jit => 1 => 1pcre.recursion_limit => 100000 => 100000PDOPDO support => enabledPDO drivers => mysql, sqlitepdo_mysqlPDO Driver for MySQL => enabledClient API version => mysqlnd 5.0.12-dev - 20150407 - $Id: 3591daad22de08524295e 1bd073aceeff11e6579 $Directive => Local Value => Master Valuepdo_mysql.default_socket => /var/lib/mysql/mysql.sock => /var/lib/mysql/mysql.so ckpdo_sqlitePDO Driver for SQLite 3.x => enabledSQLite Library => 3.7.17PharPhar: PHP Archive support => enabledPhar EXT version => 2.0.2Phar API version => 1.1.1SVN revision => $Id: f1155e62742ca367e521a3e412667d8ee34eead9 $Phar-based phar archives => enabledTar-based phar archives => enabledZIP-based phar archives => enabledgzip compression => enabledbzip2 compression => disabled (install pecl/bz2)OpenSSL support => enabledPhar based on pear/PHP_Archive, original concept by Davey Shafik.Phar fully realized by Gregory Beaver and Marcus Boerger.Portions of tar implementation Copyright (c) 2003-2009 Tim Kientzle.Directive => Local Value => Master Valuephar.cache_list => no value => no valuephar.readonly => On => Onphar.require_hash => On => OnposixRevision => $Id: 0a764bab332255746424a1e6cfbaaeebab998e4c $readlineReadline Support => enabledReadline library => EditLine wrapperDirective => Local Value => Master Valuecli.pager => no value => no valuecli.prompt => \b \> => \b \>ReflectionReflection => enabledVersion => $Id: 012f23982d9d94728b4da252b9f21f9de8afd4df $sessionSession Support => enabledRegistered save handlers => files userRegistered serializer handlers => php_serialize php php_binary wddxDirective => Local Value => Master Valuesession.auto_start => Off => Offsession.cache_expire => 180 => 180session.cache_limiter => 
nocache => nocachesession.cookie_domain => no value => no valuesession.cookie_httponly => no value => no valuesession.cookie_lifetime => 0 => 0session.cookie_path => / => /session.cookie_secure => 0 => 0session.gc_divisor => 0 => 0session.gc_maxlifetime => 1440 => 1440session.gc_probability => 0 => 0session.lazy_write => On => Onsession.name => PHPSESSID => PHPSESSIDsession.referer_check => no value => no valuesession.save_handler => files => filessession.save_path => /var/cpanel/php/sessions/ea-php72 => /var/cpanel/php/sessio ns/ea-php72session.serialize_handler => php => phpsession.sid_bits_per_character => 4 => 4session.sid_length => 32 => 32session.upload_progress.cleanup => On => Onsession.upload_progress.enabled => On => Onsession.upload_progress.freq => 1% => 1%session.upload_progress.min_freq => 1 => 1session.upload_progress.name => PHP_SESSION_UPLOAD_PROGRESS => PHP_SESSION_UPLOA D_PROGRESSsession.upload_progress.prefix => upload_progress_ => upload_progress_session.use_cookies => 1 => 1session.use_only_cookies => 1 => 1session.use_strict_mode => 0 => 0session.use_trans_sid => 0 => 0SimpleXMLSimplexml support => enabledRevision => $Id: 341daed0ee94ea8f728bfd0ba4626e6ed365c0d1 $Schema support => enabledsocketsSockets Support => enabledSPLSPL support => enabledInterfaces => OuterIterator, RecursiveIterator, SeekableIterator, SplObserver, S plSubjectClasses => AppendIterator, ArrayIterator, ArrayObject, BadFunctionCallException, BadMethodCallException, CachingIterator, CallbackFilterIterator, DirectoryItera tor, DomainException, EmptyIterator, FilesystemIterator, FilterIterator, GlobIte rator, InfiniteIterator, InvalidArgumentException, IteratorIterator, LengthExcep tion, LimitIterator, LogicException, MultipleIterator, NoRewindIterator, OutOfBo undsException, OutOfRangeException, OverflowException, ParentIterator, RangeExce ption, RecursiveArrayIterator, RecursiveCachingIterator, RecursiveCallbackFilter Iterator, RecursiveDirectoryIterator, RecursiveFilterIterator, RecursiveIterator Iterator, RecursiveRegexIterator, RecursiveTreeIterator, RegexIterator, RuntimeE xception, SplDoublyLinkedList, SplFileInfo, SplFileObject, SplFixedArray, SplHea p, SplMinHeap, SplMaxHeap, SplObjectStorage, SplPriorityQueue, SplQueue, SplStac k, SplTempFileObject, UnderflowException, UnexpectedValueExceptionsqlite3SQLite3 support => enabledSQLite3 module version => 7.2.23SQLite Library => 3.7.17Directive => Local Value => Master Valuesqlite3.extension_dir => no value => no valuestandardDynamic Library Support => enabledPath to sendmail => /usr/sbin/sendmail -t -iDirective => Local Value => Master Valueassert.active => 1 => 1assert.bail => 0 => 0assert.callback => no value => no valueassert.exception => 0 => 0assert.quiet_eval => 0 => 0assert.warning => 1 => 1auto_detect_line_endings => 0 => 0default_socket_timeout => 60 => 60from => no value => no valuesession.trans_sid_hosts => no value => no valuesession.trans_sid_tags => a=href,area=href,frame=src,form= => a=href,area=href,f rame=src,form=url_rewriter.hosts => no value => no valueurl_rewriter.tags => a=href,area=href,frame=src,input=src,form=fakeentry => a=hr ef,area=href,frame=src,input=src,form=fakeentryuser_agent => no value => no valuetokenizerTokenizer Support => enabledwddxWDDX Support => enabledWDDX Session Serializer => enabledxmlXML Support => activeXML Namespace Support => activelibxml2 Version => 2.9.7xmlreaderXMLReader => enabledxmlwriterXMLWriter => enabledxslXSL => enabledlibxslt Version => 1.1.28libxslt compiled against libxml Version 
=> 2.9.1EXSLT => enabledlibexslt Version => 1.1.28zlibZLib Support => enabledStream Wrapper => compress.zlib://Stream Filter => zlib.inflate, zlib.deflateCompiled Version => 1.2.7Linked Version => 1.2.7Directive => Local Value => Master Valuezlib.output_compression => Off => Offzlib.output_compression_level => -1 => -1zlib.output_handler => no value => no valueAdditional ModulesModule NameEnvironmentVariable => ValueXDG_SESSION_ID => 3418HOSTNAME => whm.webamazingapps.comSELINUX_ROLE_REQUESTED =>TERM => xtermSHELL => /bin/bashHISTSIZE => 1000SSH_CLIENT => 113.161.59.72 52999 22SELINUX_USE_CURRENT_RANGE =>SSH_TTY => /dev/pts/0USER => webamazingappsLS_COLORS => rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01 :cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=3 4;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01; 31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tz o=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz= 01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz 2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:* .sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01; 31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm =01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:* .tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=0 1;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:* .mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01; 35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv =01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.e mf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36: *.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg= 01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:MAIL => /var/spool/mail/webamazingappsPATH => /usr/local/cpanel/3rdparty/lib/path-bin:/usr/local/bin:/usr/bin:/usr/loc al/sbin:/usr/sbin:/opt/cpanel/composer/bin:/home/webamazingapps/.local/bin:/home /webamazingapps/binPWD => /home/webamazingappsLANG => en_US.UTF-8SELINUX_LEVEL_REQUESTED =>HISTCONTROL => ignoredupsSHLVL => 1HOME => /home/webamazingappsLOGNAME => webamazingappsSSH_CONNECTION => 113.161.59.72 52999 172.31.45.73 22LESSOPEN => ||/usr/bin/lesspipe.sh %sXDG_RUNTIME_DIR => /run/user/1002HISTTIMEFORMAT => %F %T_ => /usr/local/bin/phpPHP VariablesVariable => Value$_SERVER['XDG_SESSION_ID'] => 3418$_SERVER['HOSTNAME'] => whm.webamazingapps.com$_SERVER['SELINUX_ROLE_REQUESTED'] =>$_SERVER['TERM'] => xterm$_SERVER['SHELL'] => /bin/bash$_SERVER['HISTSIZE'] => 1000$_SERVER['SSH_CLIENT'] => 113.161.59.72 52999 22$_SERVER['SELINUX_USE_CURRENT_RANGE'] =>$_SERVER['SSH_TTY'] => /dev/pts/0$_SERVER['USER'] => webamazingapps$_SERVER['LS_COLORS'] => rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35 :bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:t w=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01; 31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.tx z=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz= 01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz 
=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:* .ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01 ;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp =01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:* .tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=0 1;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:* .ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01; 35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli =01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.c gm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36 :*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc= 01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.x spf=01;36:$_SERVER['MAIL'] => /var/spool/mail/webamazingapps$_SERVER['PATH'] => /usr/local/cpanel/3rdparty/lib/path-bin:/usr/local/bin:/usr/ bin:/usr/local/sbin:/usr/sbin:/opt/cpanel/composer/bin:/home/webamazingapps/.loc al/bin:/home/webamazingapps/bin$_SERVER['PWD'] => /home/webamazingapps$_SERVER['LANG'] => en_US.UTF-8$_SERVER['SELINUX_LEVEL_REQUESTED'] =>$_SERVER['HISTCONTROL'] => ignoredups$_SERVER['SHLVL'] => 1$_SERVER['HOME'] => /home/webamazingapps$_SERVER['LOGNAME'] => webamazingapps$_SERVER['SSH_CONNECTION'] => 113.161.59.72 52999 172.31.45.73 22$_SERVER['LESSOPEN'] => ||/usr/bin/lesspipe.sh %s$_SERVER['XDG_RUNTIME_DIR'] => /run/user/1002$_SERVER['HISTTIMEFORMAT'] => %F %T$_SERVER['_'] => /usr/local/bin/php$_SERVER['PHP_SELF'] =>$_SERVER['SCRIPT_NAME'] =>$_SERVER['SCRIPT_FILENAME'] =>$_SERVER['PATH_TRANSLATED'] =>$_SERVER['DOCUMENT_ROOT'] =>$_SERVER['REQUEST_TIME_FLOAT'] => 1572393434.511$_SERVER['REQUEST_TIME'] => 1572393434$_SERVER['argv'] => Array()$_SERVER['argc'] => 0PHP LicenseThis program is free software; you can redistribute it and/or modifyit under the terms of the PHP License as published by the PHP Groupand included in the distribution in the file: LICENSEThis program is distributed in the hope that it will be useful,but WITHOUT ANY WARRANTY; without even the implied warranty ofMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.If you did not receive a copy of the PHP license, or have anyquestions about PHP licensing, please contact license@php.net.CommentSharemarcQanswered 4 years ago0Tried also this morning as root :yum install php-intlThank you for using cPanel & WHM![root@whm ~]# yum install php-intlLoaded plugins: fastestmirror, universal-hooksLoading mirror speeds from cached hostfile * EA4: 203.174.85.202 * cpanel-addons-production-feed: 203.174.85.202 * cpanel-plugins: 203.174.85.202 * base: centos.mirrors.estointernet.in * extras: centos.mirrors.estointernet.in * updates: centos.mirrors.estointernet.inNo package php-intl available.Error: Nothing to doBut no success.CommentSharemarcQanswered 4 years ago"
"Hello,I am trying to get DirectConnect VCF utilization statistics via the CLI.Below is the command I am running:$ aws cloudwatch get-metric-statistics --metric-name VirtualInterfaceBpsEgress --start-time 2022-01-07T10:00:00Z --end-time 2022-01-09T10:00:00Z --period 3600 --namespace AWS/DX --statistics Maximum --dimensions Name=ConnectionId,Value=XXXX Name=VirtualInterfaceId,Value=YYYYAnd it returns:{"Label": "VirtualInterfaceBpsEgress","Datapoints": []}I have tried modifying the period, start and end time, however I never seem to get any data out of it.I can see there is data fine in the webview.What am I doing wrong?Thanks.FollowComment"
DirectConnect VCF utilization get-metric-statistics not returning any data
https://repost.aws/questions/QUFB2W1Bc3QfK2xc0dzh7eEg/directconnect-vcf-utilization-get-metric-statistics-not-returning-any-data
false
"Hi,I've followed all the steps to set up my static HTML page served through S3. I can access it through S3 bucket just fine but Route 53 does not work. It only shows blank page.What gives?S3 bucket sample (works)http://crosecconsulting.com.s3-website-us-west-2.amazonaws.comRoute 53 (does not work)http://croseconsulting.comFollowComment"
Route 53 does not redirect to my static HTML in S3 bucket
https://repost.aws/questions/QUAWI5igzBQFK9ib_tjInKBg/route-53-does-not-redirect-to-my-static-html-in-s3-bucket
false
"0Hi,There are various documents about how to set up a static website in an S3 bucket. The best version I know (which I wrote ;-) ) is in the "Getting Started" topic in the Route 53 Developer Guide:https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/getting-started.htmlI ran a quick Whois query to find out who you had registered croseconsulting.com with and discovered that the domain hasn't been registered. You'll want to start there.ScottCommentShareEXPERTAWS-User-6179575answered 3 years ago0There was a typo in 2nd URL. It is the same as S3 bucket name (2 'c'). I followed the doc and double checked several times but the problem stays.The correct URL is:http://www.crosecconsulting.com...and it does not work. It just forwards to a blank page. Again, bucket DNS works just fine.Any other suggestions?CommentShareivan2015answered 3 years ago0Hi,The domain is currently using name servers from another DNS service to route your traffic, so any changes that you make to your Route 53 hosted zone won't have any effect:dns1.registrar-servers.comdns2.registrar-servers.comTo fix this, perform the following steps:Get the name servers that Route 53 assigned to your hosted zone when you created it. See "Getting the Name Servers for a Public Hosted Zone" in the Route 53 Developer Guide:https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/GetInfoAboutHostedZone.htmlUpdate your domain registration to use the name servers that you got in step 1. See "Adding or Changing Name Servers and Glue Records for a Domain":https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-name-servers-glue-records.htmlScottCommentShareEXPERTAWS-User-6179575answered 3 years ago0I transferred domain from another register into Route 53 (about week ago). The NS records are set correctly under the zone too so I am puzzled where is this information coming from?!?!15:51:26 war-room *(master) $ dig @ns-345.awsdns-43.com crosecconsulting.com NS; <<>> DiG 9.10.6 <<>> @ns-345.awsdns-43.com crosecconsulting.com NS; (1 server found);; global options: +cmd;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46321;; flags: qr aa rd; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1;; WARNING: recursion requested but not available;; OPT PSEUDOSECTION:; EDNS: version: 0, flags:; udp: 4096;; QUESTION SECTION:;crosecconsulting.com.INNS;; ANSWER SECTION:crosecconsulting.com.172800INNSns-1522.awsdns-62.org.crosecconsulting.com.172800INNSns-1746.awsdns-26.co.uk.crosecconsulting.com.172800INNSns-345.awsdns-43.com.crosecconsulting.com.172800INNSns-942.awsdns-53.net.;; Query time: 28 msec;; SERVER: 205.251.193.89#53(205.251.193.89);; WHEN: Wed Dec 04 15:58:45 PST 2019;; MSG SIZE rcvd: 186CommentShareivan2015answered 3 years ago0Hi,I got it from an internal tool, but digwebinterface.com shows me the same thing:https://www.digwebinterface.com/?hostnames=crosecconsulting.com&type=&showcommand=on&colorize=on&stats=on&trace=on&sort=on&ns=resolver&useresolver=8.8.4.4&nameservers=crosecconsulting.com.172800INNSdns1.registrar-servers.com.crosecconsulting.com.172800INNSdns2.registrar-servers.com.During the process of transferring a domain registration to Route 53, you get the option to choose the name servers that you want to use. It appears that you chose to use name servers from registrar-servers.com rather than the name servers for your Route 53 hosted zone. 
Even if you have a Route 53 hosted zone when you transfer a domain to Route 53, we don't assume that you want to use that hosted zone.ScottCommentShareEXPERTAWS-User-6179575answered 3 years ago0Thank you very much for suggestion. The glue record as you stated was a problem. I do not recall selecting anything during the transfer but that is exactly what happened.CommentShareivan2015answered 3 years ago"
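For step 1 above (getting the name servers Route 53 assigned to the hosted zone), a small Node.js (aws-sdk v2) sketch; the hosted zone ID is a placeholder you would look up with listHostedZones or in the console:

const AWS = require('aws-sdk');
const route53 = new AWS.Route53();

const HOSTED_ZONE_ID = 'Z0123456789EXAMPLE'; // placeholder zone ID

route53.getHostedZone({ Id: HOSTED_ZONE_ID }).promise()
  .then(data => {
    // These are the name servers the domain registration must point to.
    console.log(data.DelegationSet.NameServers);
  })
  .catch(console.error);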
I want to install a package (loudgain) on either Amazon Linux 2 or Amazon Linux 2022 (in order to create a layer containing this binary that I can use to deploy a lambda.) It’s apparently available in the “fedora updates” package repository.How do I do this?FollowComment
How do I install a package from “fedora updates” on AL2 or AL2022?
https://repost.aws/questions/QUbWCZvgOCTw-PNocEonzGHQ/how-do-i-install-a-package-from-fedora-updates-on-al-2-or-al-2022
false
"0Hi Ratkins,I believe you want to install a loudgain package on Amazon Linux 2. To install a software package from “fedora updates”, you need to add the repository information to the /etc/yum.conf file or to its own repository.repo file in the /etc/yum.repos.d directory. You can do this manually, but most yum repositories provide their own repository.repo file at their repository URL. A documentation with guided steps has been provided please refer to it [1].Refer to references[1] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/add-repositories.htmlCommentShareLwazianswered a year agoratkins a year agoYes, I did that in the end (though it took quite a while to work out what the .repo file should look like.) But unfortunately the package I want wasn’t found in the repo, despite pkgs.org claiming it was.Share"
"Hi.I have to get an image from a private container registry with a login and password.I have set the secret in the Secrets Manager, but when I run the task I get:Asm fetching secret from the service for NXT/pwrdby_container_registry_login: AccessDeniedException: User: arn:aws:sts::<id>:assumed-role/ecsTaskExecutionRole/1a7f048f27274767bef37a1e4b97f458 is not authorized to perform: secretsmanager:GetSecretValue on resource: arn:aws:secretsmanager:us-east-1:<id>:secret:<secrete name> status code: 400, request id: a2e1d440-6aee-486f-a5d1-ae47b847ed42So, I went into the secrets manager and tried to edit the resource permissions to look like this:{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"590516527801"},"Action":"secretsmanager:GetSecretValue","Resource":"arn:aws:secretsmanager:us-east-1:<id>:secret:NXT/pwrdby_container_registry_login-DD5HwH"}]}However, this comes back with the same issue.What am i doing wrong?FollowComment"
Unable to get secret for login to external private Container Registry
https://repost.aws/questions/QUNbqQFA96Q9WtcY6mrD4O4g/unable-to-get-secret-for-login-to-external-private-container-registry
false
"0It turns out that in addition to the Secrets Manager setup, we had to setup IAM policies SecretManagerREadWrite to the TaskExecution roleCommentSharenihonnikanswered 3 years ago"
"I am testing my api with some aws managed waf rules. for testing i've put rate limit such that if there are more than 100 request per 5 minutes, my ip gets blocked. after the testing , once i make more than 100 request in a 5 minute period , how do i unblock my IP. can i do it from console.FollowComment"
Can you remove a blocked IP in AWS WAF?
https://repost.aws/questions/QUic4MuYigRpaLasRjevPi6g/can-you-remove-an-blocked-ip-in-aws-waf
false
"0You can list the IPs blocked by the rate limiting using the commands listed here: https://docs.aws.amazon.com/waf/latest/developerguide/listing-managed-ips.htmlYou can't delete IPs blocked by a rate limiting rule but they will be removed automatically once the rate from this IP drops below what you have specified in your rule.When the rule action triggers, AWS WAF applies the action to additional requests from the IP address until the request rate falls below the limit. It can take a minute or two for the action change to go into effect. - https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-based.htmlCommentSharerianbkanswered 8 months ago"
"If anyone from AWS support reads these forums, I opened xxxxxxxxxx* on 3/24 and have gotten 0 response to it yet. If someone can take it, I would greatly appreciate it.*Edit: Removed case ID — Kita B.FollowComment"
AWS Support not responding to my ticket xxxxxxxxxx
https://repost.aws/questions/QUVtZaIjCqRsaUCkNKP9zMlw/aws-support-not-responding-to-my-ticket-xxxxxxxxxx
false
"0Hi there,I've located your support case, which I observed is within the agreed timeframe as outlined on the following page:https://aws.amazon.com/premiumsupport/plans/Please note that response targets are calculated in business hours defined as 08:00 AM to 6:00 PM in the customer country, excluding holidays and weekends. We recommend you select the highest severities for cases that can't be worked around or that directly affect production applications. More information on choosing case severities can be reviewed here:https://docs.aws.amazon.com/awssupport/latest/user/case-management.htmlWith that said, I've flagged your case internally and requested that you receive assistance as soon as possible. Please monitor our Support Center for an update:https://go.aws/support-centerThank you for your patience!Best regards,— Kita B.CommentShareEXPERTAWS Support - Kitaanswered a year ago"
"BackgroundAmplify apps are easily extensible with Lambda functions, using amplify add function. Great!ProblemHow can I access the Amplify app ID from the Lambda function code? There are a lot of scenarios where I need that string in order to locate resources or access secrets in SSM.More generallyHow can my function do introspection on the app? How can I get the app ID from the Lambda function? Is there a service? Am I supposed to pass the information (somehow) through the CloudFormation template for the function?Due diligenceI've spent days trying to figure this out, and I have at least learned the secret, undocumented way to get anything in a nested CloudFormation stack's outputs into the parameters for my CloudFormation stack, so that I can create environment variables that my Lambda function can see.That does not solve my original problem of finding the top-level app ID. Or any information about the top-level app. If I could find the stack name for the top-level CloudFormation for the stack then I could learn a lot of things. I can't.How to pass stack outputs from app resources into function stack parametersI've spent days trying to figure this out, and I have at least learned the secret, undocumented way to use dependsOn in the backend-config.json to get the outputs from the CloudFormation stacks for other resources in the Amplify app and feed those into the parameters for my stack for my function: "function": { "MyFunctionName": { "build": true, "providerPlugin": "awscloudformation", "service": "Lambda", "dependsOn": [ { "category": "api", "resourceName": "Data", "attributes": [ "GraphQLAPIIdOutput" ] } ], } }}That creates a new parameter for your function that's named using a pattern that's not documented anywhere, from what I can tell: [category][resource name][CloudFormation stack output name]. You can reference that in your CloudFormation stack for your function to create an environment variable that your function code can access:{ "AWSTemplateFormatVersion": "2010-09-09", "Parameters": { ... "secretsPathAmplifyAppId": { "Type": "String" } ... "Resources": { ... "Environment": { "Variables": { "AMPLIFY_APP_ID": { "Ref": "secretsPathAmplifyAppId" },Using the AmplifyAppId in amplify-meta.json doesn't workIf I could access the provider / cloudformation data from a dependsOn then I could get the app ID into my function's stack. But that doesn't work. I spent some time eliminating that possibility.Using secretsPathAmplifyAppIdThere is a side effect of using amplify update function to add secrets. If you add any secret to the function then you will get a new parameter as an input to your function's CloudFormation stack: secretsPathAmplifyAppIdI did that and added a dummy secret that I don't really need, in order to get that CloudFormation stack parameter containing the Amplify App ID that I do need. And then I referenced that in my CloudFormation template for my function:{ "AWSTemplateFormatVersion": "2010-09-09", "Parameters": { ... "env": { "Type": "String" }, "s3Key": { "Type": "String" }, ... "secretsPathAmplifyAppId": { "Type": "String" }That works, right? No!If I create a new app in Amplify, perhaps deploying it to a staging or production account for the first time, then I'll get the error Parameters: [secretsPathAmplifyAppId] must have values from the initial build when I press "Save and Deploy" on the "Host your web app" form. This is because using secretsPathAmplifyAppId relies on the Amplify CLI adding the value to the team-provider-info.json file. 
For a new app's first deployment, "the team-provider-info.json file is not available in the Admin UI deployment job", as described in https://github.com/aws-amplify/amplify-cli/issues/8513 . And there is apparently no solution.WHY IS THIS SO HARD?!?The Amplify documentation implies that it's not difficult to add a Lambda function and do whatever. I'm a Lambda pro and a code pro, and I can do whatever. But only if I can pass context information to my code.How can an Amplify app's Lambda functions do introspection on the app?FollowComment"
How to pass the Amplify app ID to a function? How to do app introspection from backend functions?
https://repost.aws/questions/QUzLTfzx-mQmS15VMFVZBdDw/how-to-pass-the-amplify-app-id-to-a-function-how-to-do-app-introspection-from-backend-functions
true
"0Accepted AnswerHere's how I did it. AWS, please make this easier. IMHO, the app ID should be available to a function by default. If not, then it should be simpler than what I had to go through. It should at least be documented. And the documentation about what environment variables exist appears to be incorrect. Or else maybe I misunderstood it and that still seems like a problem.Variable handoff from build time to runtimeThe overall technique is to pass the AWS_APP_ID environment variable from the build-time environment to the Lamda runtime environment where it will be available to the Lambda function.AWS_APP_ID is available at build timeThis page in the Amplify documentation lists a set of environment variables that are supposed to be available at build time. It's not true. The only variable in the list that's available at build time is AWS_APP_ID. That's enough, though.Interpolate AWS_APP_ID into CloudFormation templateI cited the variable in my CloudFormation template for my function like this: "Environment": { "Variables": { "AWS_APP_ID": "{{AWS_APP_ID}}",Then I set up my build to use an off-the-shelf NPM to interpolate the variable into that. The {{}} syntax comes from that NPM. There is probably a way to do this with sed or something that could make the build process faster and not dependent on that NPM.Access environment variable from Lambda function normallyThen you can access the app ID from the Lambda function in the normal way for the function's runtime:const app_id = process.env.AWS_APP_IDconsole.log(`app_id: ${app_id}`)CommentShareTaoRyananswered a year ago"
"Hi aws re:Post!I used the AWS Timestream Web App Query Editor while developing my CDK IaC to test certain queries - specifically, I run the following and receive 1000 rows in response:SELECT measure_value::varchar FROM DATABASE.TABLE ORDER BY time LIMIT 1000However the same exact query sent in NodeJS via aws-sdk/TimestreamQuery returns 0 rows in response. If I remove ORDER BY time such that my query is:SELECT measure_value::varchar FROM DATABASE.TABLE LIMIT 1000the NodeJS Query client receives the same data as in the AWS Timestream Web App Query Editor. This is very bizarre behavior- I have not found any mention of it whatsoever in the context of AWS Timestream, and the only contextual results I found on the internet was MySQL Bug Report #70466 (accompanying stack overflow post).This really does not seem like a software configuration issue - to reiterate, if I omit ORDER BY time my NodeJS client behaves the same as my Web App client. Any insight on how to resolve this would be greatly appreciated!FollowComment"
Timestream Bug Report/ Help Request: `ORDER BY` works in Query Editor via Web App but not Query String sent via AWS SDK
https://repost.aws/questions/QU_YcL46ZmROClOMTNMJ7owg/timestream-bug-report-help-request-order-by-works-in-query-editor-via-web-app-but-not-query-string-sent-via-aws-sdk
false
0You will need to paginate on the response to get the full results. The AWS Timestream Web App Query Editor does the pagination and hence you see the results. A sample pagination code for the NodeJS query client is available at https://github.com/awslabs/amazon-timestream-tools/blob/mainline/sample_apps_reinvent2021/js/query-example.js#L244-L254. The pagination continues until the response does not contain a pagination token.CommentShareRajesh Iyer - AWSanswered a year ago
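A minimal sketch of the pagination loop the answer refers to, written with boto3 rather than the NodeJS client; the database and table names are placeholders.

```python
# Keep calling Query with the returned NextToken until it is absent; the
# console's Query Editor does this loop for you, which is why it shows all rows.
import boto3

client = boto3.client("timestream-query")
query = 'SELECT measure_value::varchar FROM "DATABASE"."TABLE" ORDER BY time LIMIT 1000'

rows = []
kwargs = {"QueryString": query}
while True:
    page = client.query(**kwargs)
    rows.extend(page["Rows"])
    token = page.get("NextToken")
    if not token:
        break
    kwargs["NextToken"] = token

print(f"Fetched {len(rows)} rows")
```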
"HelloWhen I'm developing a component locally, I may have some folders in the root, such as .git, .github, etc.. However, these folders are not needed to run the component. My question is, while usinggdk component build to build the component, is there any way to specify which folders are needed in the resulting .zip file, and which should be igored?For example, the following figure shows the contents in the .zip file built by gdk. An expected behaviour is, only the folder/file marked by the blue arrows are included in the .zip file, whereas the rest are excluded.Thank you for your help!FollowComment"
How to ignore some folders while building a Greengrass component
https://repost.aws/questions/QUHj80QxRhQGWghShFTeAdEQ/how-to-ignore-some-folders-while-building-a-greengrass-component
true
"0Accepted AnswerHi cweng. I take it you're using the Zip Build system. There is no way to specify the folders to include. You can see here what GDK will ignore:https://github.com/aws-greengrass/aws-greengrass-gdk-cli/blob/a2b4a4b7b6867a39e12d52a2a3997b9a4a605ac2/gdk/commands/component/BuildCommand.py#L183https://github.com/aws-greengrass/aws-greengrass-gdk-cli/blob/a2b4a4b7b6867a39e12d52a2a3997b9a4a605ac2/gdk/commands/component/BuildCommand.py#L199-L229If you want more control, I think your best option will be to use a custom build type.CommentShareEXPERTGreg_Banswered 8 months agocweng 8 months agoThank you @Greg_B. Custom build should be the way to go.Share"
"I have a very simple lambda that uses the OpenAI API to send a prompt to ChatGPT.The lambda handler does not appear to wait for the request to complete. I've set the timeout in the lambda Configuration to 30 seconds.The same code when run from my linux command line with the same version of Node executes successfully (i.e. gets and displays a response) in about 2 seconds.Any insights very welcome - I've been beating up on this for over a week :(Thanks,DavidSample code:const { Configuration, OpenAIApi } = require("openai");exports.handler = async function(event, context, callback) { var output;try {output = getResponse(event.prompt);console.log('finished');}catch (e){output = e.message;}let response = {statusCode: 200,headers: { "Content-type" : "application/json" },body: JSON.stringify(output)};return response;};async function getResponse(prompt){ console.log("getResponse('" + prompt + "')"); const configuration = new Configuration({apiKey: "my-api-key-here",});const openai = new OpenAIApi(configuration);console.log('make request');await openai.createCompletion({model: "text-davinci-003",prompt: prompt,temperature: 0.7,max_tokens: 500,top_p: 1,frequency_penalty: 0,presence_penalty: 0,}).then((res) => {var text = res.data.choices[0].text;console.log('GPT3 says: ' + text);}).catch((error) => {console.log('in the catch clause: ' + error);});console.log('how did we get here?');}Input:{"prompt": "what did the fox say?"}Output:Test Event Namefox_testResponse{"statusCode": 200,"headers": {"Content-type": "application/json"},"body": "{}"}Function Logs2023-04-24T13:58:50.031+01:00START RequestId: 18ac9c86-d16b-46f2-8cbd-b4f3887429ae Version: $LATEST2023-04-24T13:58:50.063+01:002023-04-24T12:58:50.063Z 18ac9c86-d16b-46f2-8cbd-b4f3887429ae INFO getResponse('what did the fox say?')2023-04-24T13:58:50.063+01:002023-04-24T12:58:50.063Z 18ac9c86-d16b-46f2-8cbd-b4f3887429ae INFO make request2023-04-24T13:58:50.064+01:002023-04-24T12:58:50.064Z 18ac9c86-d16b-46f2-8cbd-b4f3887429ae INFO finished2023-04-24T13:58:50.124+01:00END RequestId: 18ac9c86-d16b-46f2-8cbd-b4f3887429ae2023-04-24T13:58:50.124+01:00REPORT RequestId: 18ac9c86-d16b-46f2-8cbd-b4f3887429ae Duration: 92.45 ms Billed Duration: 93 ms Memory Size: 128 MB Max Memory Used: 77 MBFollowComment"
How can I use async/await in a nodejs lambda?
https://repost.aws/questions/QUAw6GEdx7RGyv6hXgf11sYA/how-can-i-use-async-await-in-a-nodejs-lambda
true
"0Accepted AnswerHi,have you tried to return from method:return await openai.createCompletion?Also you could probably remove await from getResponse methodreturn openai.createCompletionand then add await in handler.output = await getResponse(event.prompt);Did similar work in this article: https://medium.com/@alatech/build-your-personal-speaker-assistant-with-amplify-and-chatgpt-8b6433fea042Hope it helps ;)CommentShareEXPERTalatechanswered a month agodmb0058 a month agoBrilliant! Thanks so much ... I almost see how this works and why my original code doesn't but not quite :) I'll experiment until I understand properly but it's so good to have this working.DavidShare"
"I'd like to know what's the best way to avoid unexpectedly high charges for a specific AWS service.SetupUsing AWS Polly text to speech serviceA specific IAM user with a specific policy, only allowing full (*) access to the particular serviceA budget action that assigns a read only policy to that user when a certain threshold of the monthly limit (in terms of costs) is reachedPossible problemThis setup works, but the problem is that once the threshold is reached, the execution of the budget action (here: assignment of the read only policy) takes quite a long time - I've tested it and it took about 20 hours(!) before the restricted policy was assigned.I've also looked into AWS Quotas, but they don't seem to be the right solution as they are a very generic way to control the amount of request. Additionally, they can't be reduced but only increased.Other solutions or steps to take?Is there any AWS related way to further prevent being charged large amounts of money when someone (be it deliberately or accidentally) abuses the service? In the worst case I'll have to cover costs for about a whole day (see above) before any restrictive automatic measurements will be taken.I can run the code from my backend and also implement some type of rate limiting, but still...FollowComment"
What is the best way to prevent unexpectedly high charges for a specific AWS service?
https://repost.aws/questions/QUVo1t1rUrRXuj7N9sbCQ2IA/what-is-the-best-way-to-prevent-unexpectedly-high-charges-for-a-specific-aws-service
false
"0Possible solutions could beUse AWS Budgets:In addition to the read-only policy approach you have already taken, you could set up AWS Budgets to monitor your costs for Polly. AWS Budgets can send you notifications when your costs exceed a certain threshold, giving you early warning of potential cost overruns.Use rate limiting:As you mentioned, you could implement rate limiting on your backend to restrict the number of API calls that can be made to Polly within a certain timeframe. This can help prevent excessive usage and reduce the risk of unexpected cost overruns.CommentSharerePost-User-3543171answered a month ago"
"Hi,I'm trying to write data from an RDS Aurora database (source) to S3 (target) using an AWS Glue job and I'm getting this error:py4j.protocol.Py4JJavaError: An error occurred while calling o107.pyWriteDynamicFrame.: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4, ip-172-30-x-x.ec2.internal, executor 1): com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failureThe last packet successfully received from the server was 129,902 milliseconds ago. The last packet sent successfully to the server was 129,902 milliseconds ago.at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)...Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.Any idea of what could be happening?FollowComment"
Failed ETL Job
https://repost.aws/questions/QUi1_DXWhBQomyCK4pC6Kq8w/failed-etl-job
false
"0Hi, I've got the same error when trying to ETL from S3 to RDS Aurora Servrless destination, but connection test in Glue UI is shown as successful.Job run ID: jr_d538e6bd5b30686c1d9a5b12fea74e131ab64308ad3c7ee03991ec0ebf2885e8Have you been able to resolve this issue?CommentSharenonmanifoldanswered 4 years ago"
"I Have admin access to AWS and trying to connect to amazon athena through Tableau desktop by providing access key id , security access key. but i am facing below error.An error occurred while communicating with Amazon AthenaInvalid username or password.Error Code: E88DBC3B[Simba]AthenaJDBC An error has been thrown from the AWS Athena client. The security token included in the request is invalid. [Execution ID not available]Invalid username or password.To avoid this error, I created IAM user & provided Amazon s3 , Athena Glue & Quick sight full access. Downloaded credentials for the IAM user.I am able to establish connection between tableau desktop and amazon Athena successfully with these IAM user credentials but i have to establish connection with Domain user credentials ( access key id, security access key) only, Please help me on this?FollowComment"
Tableau connection issue
https://repost.aws/questions/QUa6rPL1CCSNysXvQ2CxPowQ/tableau-connection-issue
false
"0So let me understand - you can connect OK with an IAM user's API keys, but want to connect instead with temporary federated domain credentials? To do that you need not only the access key id and secret access key for your session but the security token too.CommentShareEXPERTskinsmananswered 2 months agorePost-User-8013034 2 months agoBut I don't see option to provide security key in Tableau desktopShareskinsman EXPERT2 months agoIt looks like Tableau doesn't yet provide that option - see https://community.tableau.com/s/question/0D78b0000089p0MCAQ/detail?s1oid=00D4T000000Dj8G&t=1657735074796&s1nid=0DB4T000000GnPK for example.As suggested there, you could try connecting to Athena via ODBC.SharerePost-User-2134540 2 months agoHere domain user means , I created user through DSM tool ( Disk share management tool) and it has admin access.After connecting amazon web console with DSM user , able to access Athena and Query data without any error. but when I try to connect amazon Athena from Tableau desktop with this DSM user ( not IAM user, cannot see under IAM - Users) Access key id, Secret access key getting above error.SharerePost-User-2134540 2 months agoI have gone through AWS documents and other 3rd party documents also, I don't find any info related to DSM tool user , found only IAM user only. Is it possible to connect amazon Athena from local tableau desktop through DSM tool user?Shareskinsman EXPERT2 months agoNo, the only options are Access key ID & Secret access key from an IAM User (as per https://help.tableau.com/current/pro/desktop/en-us/examples_amazonathena.htm) or as I mentioned before, connecting via ODBC instead.Share"
"What happens to the EC2 Instance state (ex: stopped, pending, terminated), when an Availability Zone failure occurs where the EC2 instance was launched.FollowComment"
Availability Zone Failure
https://repost.aws/questions/QUGHfPkGDUSU-xyFcNr9IlmQ/availability-zone-failure
false
"0Availability Zones are a collection of many things: networking, management components, compute, storage, etc. - and all the other parts that tie them together. So it's very difficult to say "what happens during an Availability Zone failure" because - what's the failure you want to talk about?If we assume "the Availability Zone has been wiped off the map and is never coming back" then it's clear what the answer is. :-)But what if it is just a temporary network connectivity failure? That doesn't affect the EC2 instance state at all because nothing has happened to it. For an external observer (outside the Availability Zone) the instance is not available but it's actually running fine.And naturally there are many other failure modes with many other answers.So it's much better to look at the problem from another angle: What do you need the availability of the application to be? What components make up the application? How can I design my AWS infrastructure to ensure that I meet my availability and uptime requirements? At this point, thinking about the failure of an Availability Zone is much like other failures such as "what if the network is unavailable" or "what if the instance fails". And those are scenarios that we provide answers for: auto-scaling or networking that automatically routes around problems.But as above: It's not a question that there is a single, easy answer to.CommentShareEXPERTBrettski-AWSanswered 5 months ago"
"Hi AWS,I have deployed a backend application written in NodeJS on EC2 instance. Post that I have created the CI/CD pipeline for the same using Jenkins pipeline. I don't want to store the .env file on the source control (Bitbucket in my case), rather want to create it on the fly and use them as Jenkins secrets.However when the pipeline completes and resulted into the SUCCESS state the .env file is not getting stored on the EC2 instance where the repository is cloned. Can you please tell me what's the reason for that.FollowComment"
To fetch the Jenkins Secret from a Jenkins Server to EC2 instance under the specific location
https://repost.aws/questions/QUN663j5LfRM2nYjV-XBrPrw/to-fetch-the-jenkins-secret-from-a-jenkins-server-to-ec2-instance-under-the-specific-location
true
"1Accepted AnswerIf I interpret correctly, the .env is not part of the source control and so it is not intended to be present on EC2 as part of git clone (unless you mean it to be present there through git clone and have not specified the file under .gitignore). With a gitignore file, all files that start with a period ( . ) will be ignored.If you are creating a .env file on the fly in a jenkins workspace, you will need to copy the file across to EC2 explicitly by using something like "execute command over SSH" and then using "scp" or through "Publish Over SSH plugin" in Jenkins.CommentSharemehtanidanswered 4 months ago"
"Hi Everyone.I would like to know how redshift knows what schema a table should be created in when a database contains multiple schemas and the table is being created programmatically from a python script. I want to know this because I have a script where I create a table and copy data to it from S3. Both lines run without errors but when I log into AWS and check my cluster, the table is not there. I currently use a schema_name.table_name format when writing the create statement because I don't know how else to specify the schema the table should be created in.The relevant lines of code are as follows:sql = 'CREATE TABLE IF NOT EXISTS ' + schema_name.table_name + ' (column datatype, column datatype, column datatype);' cursor.execute(sql) sql = """COPY %s FROM '%s' access_key_id '%s' secret_access_key '%s' delimiter '%s' CSV ignoreheader 1 timeformat 'auto';""" % (table_name, s3_path_to_file_name, access_key_id, secret_key,delim) cursor.execute(sql)Like I said the code runs without errors but when I check the cluster, the table is not there.I would be extremely grateful for any hints that could help resolve this problem.FollowComment"
python redshift_connector executes create table statement with out errors but table not created in redshift cluster
https://repost.aws/questions/QUFoS1ncxmTCSZWZg2M0YYyQ/python-redshift-connector-executes-create-table-statement-with-out-errors-but-table-not-created-in-redshift-cluster
false
"1Abdul,When no schema name is provided while creating a table, Redshift relies on 'search_path' to retrieve the default one available. By default 'search_path' value is '$user', public. Here $user corresponds to the SESSION_USER. If it finds a schema name with the same name as session user, the table is created here. If redshift didn't find a schema, it looks for the next schema name in the list which is 'public'.If none of the names listed in 'search_path' has a corresponding schema, then create table operation will fail with error "no schema has been selected to create in".To confirm the table creation is successful, verify the schema name provided from python is referenced in 'search_path'.-- SQL script to view contents of search_path. If the schema mentioned in the python script isn't listed, then you need to reference it.show search_path-- SQL script to include a schema to search_path. Note: this overwrites existing schema names in the list,set search_path to replace_with_schema_name;-- In order to append a new entry to existing search_path. Assuming the existing default values are '$user',publicset search_path to '$user',public, replace_with_schema_nameYou can find more on search_path herehttps://docs.aws.amazon.com/redshift/latest/dg/r_search_path.htmlThe other possibilities could beCan you confirm if commit() is called after the table creation.Ensure you are looking at the right aws region, cluster and database name the table was created under.Make sure the logged in user has necessary permissions to view the table.Querying the system table, will return rows referencing schema name, if the table creation was successful. Please ensure the 'search_path' contains the schema name, referenced in your python code, else no records will be returned.SELECT * FROM SVV_TABLE_INFO where "table" like '%replace_with_your_table_name%';SELECT * FROM PG_TABLE_DEF where tablename like '%replace_with_your_table_name%';You can find more details on Create table syntax herehttps://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_TABLE_NEW.htmlCommentShareAJPanswered a year agoAbdul a year agoThanks AJP for your very detailed answer. I will try out your suggestions and let you know how it goes.ShareAbdul a year agoThanks so much for the help AJP. A missing commit() was indeed the problem, or part of the problem. Prior to noticing the missing commit(), I had tried updating the search_path but it wouldn't update because the cluster was using the default parameter group. So I created a new one. The problem was still occuring after rebooting the cluster. It was then I noticed your comment about the commit(). Thanks so much for your help.ShareAJP a year agoGlad to hear commit was the missing piece of the puzzle. Much appreciate if you can accept the response as answer.Share"
"Hi folks, Hope you are well and safe and also hope that this question doesn't gonna be an already asked question.I'm here to ask a help about a possible solution with AWS Lambda and other AWS environment stuffs: I need to understand if it is possible to use AWS Lambda function as bridge between HTTP request and MQTT topic.In particular, I need where and HTTP request trigger an AWS Lambda function, which in turn publish on a topic; when datas are published on this topic, an IoT thing response to the request, posting response on a topic.I require AWS Lambda to receive this event and send back to the first HTTP request, the response received from IoT thing.Is it possible to achieve this kind of synchronization mechanism similar to the one shown here where, instead of a DynamoDB there is a pub/sub mechanism?Cheers.FollowComment"
AWS Lambda function as bridge between HTTP and MQTT
https://repost.aws/questions/QUk_LUXfggSYurE6sEgstiow/aws-lambda-function-as-bridge-between-http-and-mqtt
false
"1The trick to this would be responding back on the original HTTP connection. As David Katz point out, it is pretty easy to write a lambda function that is triggered from API Gateway to publish to an IoT Topic. Not sure your language of choice, but for example if you are using python, use boto3 library, create an iot client, then use the publish() function to publish your message.Subscribing to a topic is more tricky and will be time consuming for a lambda function. I think you have two possibilities for a solution.First, you would have to spin up a MQTT connection in your lambda, subscribe to the response topic, and then wait for an answer. Again, as an example, in python you can use AWSIoTPythonSDK to connect to your MQTT broker, and then subscribe to your topic. Once the message is received, then your lambda may respond back on the original HTTP connection.Alternatively, create an IoT Rule that listens for your IoT response topic. The IoT rule sends the message to either a SQS Queue or Dynamo table. Your lambda function can then poll the queue or dynamo table for the message. Read the message and send it back on the HTTP connection.CommentShareErikanswered a year agom_piffari a year agoHi Erik, thanks for your reply.So, are you saying that it is possible for an AWS Lambda Function to subscribe to a topic and wait for response, keeping alive the HTTP request?Cause I didn't find any solution that allow me to subscribe to topic on AWS Lambda function - publish on topic and wait for response of first subscription (all while a single HTTP request is kept alive).ShareErik a year agom_piffari, Please see my updated answer. In fairness I have not tried either solution in real life. Both are doable. Personally, I would go with the alternative approach, using an IoT Rule and SQS Queue. As importing a MQTT client into a lambda seems a bit overkill.Sharem_piffari a year agoErik, thank for point out an alternative approach.I've tried out the second approach: instead of using an SQS or a DynamoDB I've used an IoT Rule in order to trigger directly the AWS Lambda when something is published on a topic.However, when the IoT Rule is triggered, the initial HTTP request is gone away, and I'm not able to response to it anyway.Maybe, do you know how can I integrate the AWSIoTPythonSDK (that you linked in original answer) in AWS Lambda? Where I can find some examples?Share1Requests: Publishing to MQTT from lambda works no problem - and triggering the lambda from AWS API GW is obviously simple.Responses: If you use AWS IoT, you can define a rule to trigger a lambda for the response: https://docs.aws.amazon.com/lambda/latest/dg/services-iot.htmlif you are not using AWS IoT, I'd assume you would have to run a compute instance to receive incoming messages via an MQTT client and then invoke lambda yourself - perhaps with SQS or Kinesis in the middle.CommentShareDavid Katzanswered a year agom_piffari a year agoHi David, thanks for your reply.My problem is related to "you can define a rule to trigger a lambda for the response": I've already linked a AWS IoT rule (with SQL query) to a lambda. The problem actually is to return a response to the HTTP connection that firstly start up all process. Hope that could help clarify the concept.Share"
"Hi,I can not delete my ACM certificate because it is associated with some load balancer. I just removed all custom domain names in API Gateway, but it still shows that there are three elastic load balancers associated.Can anyone help me?Best, CuongFollowComment"
Can't Delete Certificate after removing all Custom Domain Name in API Gateway
https://repost.aws/questions/QUc75lbcW5QnWrsp91YgHPtg/can-t-delete-certificate-after-removing-all-custom-domain-name-in-api-gateway
false
"0Hi,As described here https://aws.amazon.com/premiumsupport/knowledge-center/acm-certificate-resources/ you can try to delete the custom domain name for API Gateway, by running the AWS command delete-domain-name, so that it may clean up ACM associations too.Hope it helpsCommentShareEXPERTalatechanswered 4 months ago"
"Ive a repo with couple of images. inspector2 generates findings only for the first image, for the rest I only see a "No scan findings" message in ECR. I can hardly imagine that only a single image has any issues as all the images are the earlier builds of that image and for sure they have vulnerabilities. is there any way to find out if those images are actually scanned?FollowComment"
Inspector2 ecr scanning
https://repost.aws/questions/QUmt8cgNIJQ06TGs4yuY9TEw/inspector2-ecr-scanning
false
"1btw if someone from AWS actually reads this, would be nice to display this on the UI somehow. eg image is not scanned due to its age or somthing like this. if I goto a repo with old images, I see a No findings to display message. this suggest that my images are OK, but in reality they were not scanned at all :)CommentSharefpganswered a year agoChris_R a year agoThanks for the feedback. I'm happy to raise this request to our service teams to add a UI message.Share0As far as I've tried, if the repository is subject to continuous scanning and the image is within 30 days of being pushed, Inspector will automatically scan it.For more information about the 30-day limit, please refer to the following documenthttps://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning-enhanced.htmlCommentSharehayao-kanswered a year agofpg a year agoah, forgot about the 30 day limit, many thanksShare0You might want to try pushing a trivial change to the repo to see if Inspector v2 continuous scanning picks it up and scans the repo.CommentShareklarsonanswered a year agofpg a year agosure, hayao-k was correct, in our case images were older then 30 days.Share"
"When a Transit Gateway (TGW) is created, either in CloudFormation or in the console, several properties can be configured, specifically:AutoAcceptSharedAttachmentsDefaultRouteTableAssociationDefaultRouteTablePropagationDescriptionDnsSupportVpnEcmpSupportOnce the TGW is created, Is there a way for a user to change these properties after creation without having to re-create the TGW again?FollowComment"
Modify Transit Gateway Properties After Creation
https://repost.aws/questions/QU34x4dVdFQHusx4Vvwl-BQw/modify-transit-gateway-properties-after-creation
true
"0Accepted AnswerTransit Gateway (TGW) allow the modification of certain properties after the creation.When you modify a transit gateway, the modified options are applied to new transit gateway attachments only. Your existing transit gateway attachments are not modified.Refer to Modify a transit gateway in the user guide or the AWS CLI/API docs for options that can be modified after creation.CommentShareEXPERTRizwan_Manswered 4 years ago"
If I create a CloudFront distribution with a Lambda function URL as the origin, how much will I be charged for egress between the function URL and CloudFront?FollowComment
How does data transfer pricing work between lambda function urls and cloudfront?
https://repost.aws/questions/QUOSnxMfbwTMCyeuzRMVFMHA/how-does-data-transfer-pricing-work-between-lambda-function-urls-and-cloudfront
false
"0Based on the Lambda pricing page: Data transferred “in” to and “out” of your AWS Lambda functions, from outside the region the function executed, will be charged at the Amazon EC2 data transfer rates as listed under "Data transfer".In the EC2 pricing page: Data Transfer OUT From Amazon EC2 To Amazon CloudFront $0.00 per GB.So there is no charge for the data transferred from Lambda URLs to CloudFront.CommentShareEXPERTUrianswered a year ago"
"I am an engineer designing a product integration with AWS Marketplace. I'm finding some of the documentation around what would pass testing in order for the product to be published a bit vague. It seems testing can take a while, so I'd like to minimize the back and forth during that phase by getting clarity now."Customers must be able to see the status of their subscription within the SaaS application, including any relevant contract or subscription usage information."quoted from https://docs.aws.amazon.com/marketplace/latest/userguide/saas-guidelines.htmlWhat constitutes relevant contract information? Which pieces of information are we required to display? Start date, expiration date, entitlement level?Related question, what does it mean to "correctly handle" notifications around entitlements and subscriptions? Are we required to quickly reflect back to the customer any changes within the product? Are we required to make behavioral access changes to the product, depending on the kind of notification?Thanks in advanceFollowComment"
​AWS Marketplace SaaS Product Integration Testing - viewing subscription status
https://repost.aws/questions/QUWMZS1M1UQXWP-Tfki3enNg/aws-marketplace-saa-s-product-integration-testing-viewing-subscription-status
false
"01/"Which pieces of information are we required to display? Start date, expiration date, entitlement level?" -- Yes, but it depends on the product that you are selling. In general information related to the product purchase such as how long that specific contract dimension purchase will be and whether there will be a usage based charge.2/"What does it mean to "correctly handle" notifications around entitlements and subscriptions? Are we required to quickly reflect back to the customer any changes within the product? Are we required to make behavioral access changes to the product, depending on the kind of notification?" -- This is more about our two SaaS APIs (entitlements and subscriptions ) handling depending on the SaaS pricing models. The SaaS pricing model determines which APIs should be called:SaaS contracts – GetEntitlements in the AWS Marketplace Entitlement Service.SaaS contracts with consumption – GetEntitlements in the AWS Marketplace Entitlement Service and BatchMeterUsage in the AWS Marketplace Metering Service.SaaS subscriptions – BatchMeterUsage in the AWS Marketplace Metering Service.CommentShareJCanswered 2 months agoJT 2 months agoThanks for your answer. It sounds like there are no requirements to do anything with the information in the notifications other than call the APIs, just prove that we're calling the APIs?Share"
"We need to use OKTA as the entry point for users to gain access to CloudWatch dashboards, since we do not want to create new AWS accounts to allow users to use themFollowComment"
Share your Amazon CloudWatch Dashboards with anyone using OKTA
https://repost.aws/questions/QUt1IX_PBeQd-ft9udOkYdRg/share-your-amazon-cloudwatch-dashboards-with-anyone-using-okta
false
"0You can share CloudWatch Dashboards in three ways:Share a single dashboard and designate specific email addresses and passwords of the people who can view the dashboard.Share a single dashboard publicly, so that anyone who has the link can view the dashboard.Share all the CloudWatch dashboards in your account and specify a third-party single sign-on (SSO) provider for dashboard access. All users who are members of this SSO provider's list can access the dashboards in the account. To enable this, you integrate the SSO provider with Amazon Cognito.Here's an in depth breakdown of each. Your use case sounds like bullet number three. Like it says you do need to integrate the SSO provider, Okta in your case, with Amazon Cognito. Here's instructions on how to do that. Hope this helps!CommentShareAWSJoeanswered a year ago"
"hi , i have tried using this 2 types of approaches to retrieve temporary credentials from AWS account , getting the same error as shown in the screenshot . Please let me knw if there are any better approaches or if any fix for the error , Thank you.// 1st apporachAssumeRoleRequest request = new AssumeRoleRequest(); request.RoleArn = "arn:aws:iam::532634566192:role/ap-redshift"; request.RoleSessionName = "newsessionanme"; client = new AmazonSecurityTokenServiceClient(); AssumeRoleResponse resp = client.AssumeRole(request); Console.WriteLine(resp.Credentials); Console.ReadLine();// 2nd approachclient = new AmazonSecurityTokenServiceClient(); var response = client.AssumeRole(new AssumeRoleRequest { RoleArn = "arn:aws:iam::532634566192:role/ap-redshift", RoleSessionName = "newsessionanme" }); AssumedRoleUser assumedRoleUser = response.AssumedRoleUser; Credentials credentials = response.Credentials;This is the error i am getting "Unable to get IAM security credentials from EC2 Instance Metadata Service.'" as also shown in the picture .FollowComment"
"while trying to retrieve the temporary credentials from Amazon using AWS SDK , i am facing this issue."
https://repost.aws/questions/QUDQRrPBITRhmdH2rejcYYrA/while-trying-to-retrieve-the-temporary-credentials-from-amazon-using-aws-sdk-i-am-facing-this-issue
false
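The error above usually appears when the SDK finds no base credentials (no profile, environment variables, or instance role) and falls back to the EC2 instance metadata service. A minimal sketch of the same AssumeRole flow with boto3, assuming a configured profile or environment credentials; the role ARN comes from the question, and the profile name is a placeholder.

```python
# Give the STS client an explicit credential source, then assume the role.
# This mirrors the .NET snippets above; it is an illustration of the flow,
# not the .NET SDK itself.
import boto3

session = boto3.Session(profile_name="default")  # or rely on env vars / an instance role
sts = session.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::532634566192:role/ap-redshift",
    RoleSessionName="newsessionanme",
)
creds = resp["Credentials"]
print(creds["AccessKeyId"], creds["Expiration"])
```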
"I have an NodeJS Express API running on Lambda and I can successfully access its endpoints through API Gateway. I'm looking to use this as a backend for a Unity game. I've been reading both GameKit and GameSparks docs and it seems to me that GameSparks it's more suit for connecting to an already existing Lambda app. For what I've learned, to do that with GameKit I would need to create custom features and that would include some C++ coding. Am I in the right thought process?Thanks a lot.FollowComment"
Existing Lambda app - GameKit or GameSparks?
https://repost.aws/questions/QUY_LZEDCcT_-jjpkJIn7dvA/existing-lambda-app-gamekit-or-gamesparks
false
"Hi,My S3 has disappeared from the AWS UI. I was in the middle of transferring it between buckets in the same account and my broadband dropped and the service interrupted. I now can't find it at all. Asking the accounts team they said they could see the bucket but I have no idea where its disappeared too. Checking the logs it doesn't seem like the bucket was deleted either.FollowComment"
S3 Bucket Disappeared While Transferring into another Bucket in same account
https://repost.aws/questions/QU907vTre6SGCQSaBpizRnsw/s3-bucket-disappeared-while-transferring-into-another-bucket-in-same-account
false
"0It could be related to permissions. It is possible that someone modified the IAM policy that is applicable to you or has implemented a bucket policy and you no longer can list the buckets. Check with your account owner regards your permissions.CommentShareSandeepVudataanswered a year ago0I'm an admin so I doubt its a permissions issue. But thanks for the thought, it is a pretty common thing to overlookCommentShareGarrusanswered a year ago"
"I was able to include the DatabricksJDBC42.jar in my Glue Docker container used for local machine development (link).I am able to reach the host using Jupyter notebook, but I am getting an SSL type errorPy4JJavaError: An error occurred while calling o80.load. : java.sql.SQLException: [Databricks][DatabricksJDBCDriver](500593) Communication link failure. Failed to connect to server. Reason: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target.My connection string looks like this:.option("url","jdbc:databricks://host.cloud.databricks.com:443/default;transportMode=http;ssl=1;httpPath=sql/protocolv1/o/111111111111111/1111-111111-abcdefghi;AuthMech=3;UseNativeQuery=0;StripCatalogName=0;")\ .option("dbtable","select 1")\ .option("driver", "com.databricks.client.jdbc.Driver")\ .load()I used the same JDBC string in my code uploaded to our live account and the AWS Glue job runs and executes the queries in dbtable just fine. Its just in the local Docker Glue development container where we get this SSL error.I tried adding a separate option for the sslConnection and sslCertLocation and placed tried the files in /root/,aws as well as the jupyter notebook folder. The cert is showing in directory listings and is correctly assigned but the jdbc connection is failing with the SSL error.Anyone see this before or have a suggestion for next steps?Thanks.FollowComment"
How to fix: AWS Docker container for Glue and Databricks JDBC connection - SSL PKIX path building failed?
https://repost.aws/questions/QU_ZelpaZeQxifaAXAyS32HA/how-to-fix-aws-docker-container-for-glue-and-databricks-jdbc-connection-ssl-pkix-path-building-failed
false
"0Hello,The SSLHandshakeException error you are seeing typically occurs when the SSL certificate presented by the server is not trusted by the client. Since the the JDBC connection works in the AWS Glue cloud environment but not in your local development Docker container, it's possible that the difference in behavior is due to differences in the networking and security setup between the two environments. Here are a few suggestions to help you troubleshoot this issue:Check if there are any firewall rules or network restrictions that could be blocking the connection from your local development environment. For example, if you are behind a corporate firewall, it may be necessary to configure the firewall to allow outbound connections to the Databricks host and port.Verify that the SSL certificate presented by the Databricks host is trusted by your local development environment. You can do this by checking the truststore used by the JVM running in your Docker container. You may also want to check if there are any differences in the SSL/TLS configuration between the Glue cloud environment and your local development environment.Check if the version of the JDBC driver used in your local development environment is the same as the one used in the Glue cloud environment. If there are any differences in the driver version, it's possible that there could be compatibility issues.Additionally, you could try setting the "sslTrustStore" and "sslTrustStorePassword" options in your JDBC connection string to point to the location of the truststore and the password to access it, respectively.CommentShareSUPPORT ENGINEERNitin_Sanswered 3 months agoGonzalo Herreros 3 months agoI think that specific error can only be caused by 2, it's possible that the cacert (Certificate Authorities certs) than the container brings are not up to date. Maybe you want to try with the new Glue 4 docker image, otherwise you would have to add a truststore with the DataBricks cert but that's not easy if you have never done it.SharerePost-User-4513773 3 months agoHi Nitin and Gonzalo, thanks for your responses. I thought the same and retrieved the certificate from the host and tried to see if I could add that to docker container but it seemed to be missing some tools and I couldnt figure out how to install them, yet. I have added certs to a truststore before but not in a docker container with a pared down selection of tools and ability to install/use root.I will def. try Glue 4 docker image, and cont. to see if I can get the cert added. Thanks for providing some avenues to move forward! :)ShareGonzalo Herreros 3 months agoYou can root into a container e.g. "docker exec -u root -ti glue_pyspark /bin/bash" but to make the changes permanent would need to update the imageShare0I tried glue 4.0 container image, but it didn't work out of the box. The updates to the container are very nice though--I haven't tried the debug options, but looks great!I tried to install the certs but faced a challenge because update-ca-certificates command is not available. I tried logging in as root and installing it, but faced different SSL errors. It might be an uphill battle for me and it would be less painful to push changes in and test, rather than wrestle with this.Thanks for the help, and I look forward to working in the new glue 4.0 docker container.CommentSharerePost-User-4513773answered 3 months ago"
"Traceback (most recent call last):File "PATHTOFILE", line 10, in <module>shard_it = kinesis.get_shard_iterator("newstream", shard_id, 'AT_TIMESTAMP', timestamp)["ShardIterator"]File "C:\Python39\lib\site-packages\boto\kinesis\layer1.py", line 425, in get_shard_iteratorreturn self.make_request(action='GetShardIterator',File "C:\Python39\lib\site-packages\boto\kinesis\layer1.py", line 877, in make_requestraise exception_class(response.status, response.reason,boto.exception.JSONResponseError: JSONResponseError: 400 Bad Request{'__type': 'SerializationException', 'Message': 'class com.amazon.coral.value.json.numbers.TruncatingBigNumber can not be converted to an String'}I am making a python app that will collect data from kinesis and write it to a file. I want to read data after a certain period of time. so I am using the "AT_TIMESTAMP" in get_shard_iterator() method.This function will be implemented in lambda.FollowComment"
TruncatingBigNumber can not be converted to an String
https://repost.aws/questions/QUz-qYPhLZQHCz4lExbfEknw/truncatingbignumber-can-not-be-converted-to-an-string
false
0Upgrading to Boto3 solved the issueCommentShareShamnaniJimmyanswered 2 years ago
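A minimal sketch of the same AT_TIMESTAMP read with boto3 (the legacy boto package is what raised the SerializationException); the stream name and timestamp are placeholders.

```python
# boto3 accepts a datetime for the AT_TIMESTAMP iterator, which the old boto
# client failed to serialize correctly.
import datetime
import boto3

kinesis = boto3.client("kinesis")
stream = "newstream"  # placeholder
shard_id = kinesis.describe_stream(StreamName=stream)["StreamDescription"]["Shards"][0]["ShardId"]

iterator = kinesis.get_shard_iterator(
    StreamName=stream,
    ShardId=shard_id,
    ShardIteratorType="AT_TIMESTAMP",
    Timestamp=datetime.datetime(2021, 6, 1, 12, 0, 0),  # placeholder
)["ShardIterator"]

records = kinesis.get_records(ShardIterator=iterator, Limit=100)
print(len(records["Records"]), "records")
```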
"Is it possible to use VPC Lattice with ECS Service using ECS Service Connect? It seems that the only possible solution is to use load balancer, but that seems costly.FollowComment"
Using VPC Lattice with ECS Service
https://repost.aws/questions/QUfBBhHuGpRTGMSsGMlgUbvQ/using-vpc-lattice-with-ecs-service
false
"0Yes, it's possible. To use ECS Service Connect with VPC Lattice, you would need to create a VPC Lattice service and configure it to use the ECS Service Connect integration.CommentShareUsman Ahmadanswered a month agosimon a month agocan you maybe provide links to some documentation for using ECS Service Connect with VPC Lattice?Share0Using a load balancer with VPC Lattice is a standard common solution, but it is not the only solution. The cost of using a load balancer will depend on your specific use case and the size of your infrastructure. Sometimes, using ECS Service Connect may be a more cost-effective solution.CommentShareUsman Ahmadanswered a month ago0At this point, you can configure Amazon ECS tasks in VPC Lattice as a service with a workaround and have an ALB/NLB as a target. There is a roadmap item to integrate Amazon ECS and VPC Lattice with advanced features like Authentication and AuthorizationCommentSharePiyushMattooanswered a month ago"
"I have a Prod and Dev AWS account, and I would like to bring in the live data within the DEV AWS account. The live data is connected to the PROD account via the following path (OPC-UA server -> Greengrass V2 SiteWise Gateway (running on an EC2) -> SiteWise Console).So the question: is it possible to create a new gateway in SiteWise ( in DEV account) and deploy it on the SAME server that's running the existing gateway for PROD?FollowComment"
is it possible to deploy two (2) SiteWise gateway configuration on single EC2 machine?
https://repost.aws/questions/QUjuXDkiXdSvWZ9mSgYN-U2A/is-it-possible-to-deploy-two-2-sitewise-gateway-configuration-on-single-ec2-machine
false
"1Hi Roshan, it is possible to do what you want by installing a new Greengrass Core instance on the same server in a different root folder. From the Sitewise console, select Create Gateway and go through the Default Setup. Once the installation script has been dowloaded, open it and edit the line withGREENGRASS_FOLDER="/greengrass"replacing "/greengrass" with another folder.If you are installing on Windows, just pass the new folder name as -InstallPath parameter to the scriptNOTE: This will not work if you are also installing the data processing pack, as this pack will open some ports that will conflict with those opened by the existing gateway.CommentShareEXPERTMassimilianoAWSanswered 9 months agorePost-User-Roshan 9 months agoThanks for the response. I will try this out at the earliest.Share"
"Describe the ErrorRunning IDT shows error in cloudcomponent logthe Greengrass deployment is COMPLETED on the device after 180 secondscomes up with2023-Mar-29 10:22:22,494 [cloudComponent] [idt-c057b1fc3d6bc618a399] [ERROR] greengrass/features/cloudComponent.feature - Failed at step: 'the Greengrass deployment is COMPLETED on the device after 180 seconds'java.lang.IllegalStateException: Deployment idt-c057b1fc3d6bc618a399-gg-deployment did not reach COMPLETED at com.aws.greengrass.testing.features.DeploymentSteps.deploymentSucceeds(DeploymentSteps.java:311) ~[AWSGreengrassV2TestingIDT-1.0.jar:?] at ✽.the Greengrass deployment is COMPLETED on the device after 180 seconds(classpath:greengrass/features/cloudComponent.feature:26) ~[?:?]2023-Mar-29 10:22:22,510 [cloudComponent] [idt-c057b1fc3d6bc618a399] [INFO] greengrass/features/cloudComponent.feature - Finished step: 'the com.aws.HelloWorld log on the device contains the line "Hello World!!" within 20 seconds' with status SKIPPED2023-03-28T05:33:39.019Z [ERROR] (Copier) com.aws.greengrass.util.orchestration.SystemdUtils: systemd-setup. {stderr=Created symlink /etc/systemd/system/multi-user.target.wants/greengrass.service → /etc/systemd/system/greengrass.service., command=systemctl enable greengrass.service}it seems like the mqtt test also have same error in the log after a couple times of trying not sure the problem related to the errorbut some how the mqtt test pass./mqtt/mqttpubsub/greengrass_2023_03_29_01_0.log:2023-03-29T01:59:34.313Z [ERROR] (Copier) com.aws.greengrass.util.orchestration.SystemdUtils: systemd-setup. {stderr=Created symlink /etc/systemd/system/multi-user.target.wants/greengrass.service → /etc/systemd/system/greengrass.service., command=systemctl enable greengrass.service}Details:I am tyring to use IDT for greengrass v2 followinghttps://docs.aws.amazon.com/greengrass/v2/developerguide/device-config-setup.htmlby following the official guideafter by launching greengrass withsudo -E java -Droot="/test/greengrass/v2" -Dlog.store=FILE -jar ./GreengrassInstaller/lib/Greengrass.jar --aws-region us-west-2 --thing-name IM30 --thing-group-name GreengrassQuickStartGroup_test --component-default-user root:root --provision true --setup-system-service true --deploy-dev-tools trueit shows"Successfully set up Nucleus as a system service"while watching systemctl status greengrass.serviceroot@i350-evk:/test/greengrass/v2# systemctl status greengrass.service● greengrass.service - Greengrass Core Loaded: loaded (/etc/systemd/system/greengrass.service; enabled; vendor preset: disabled) Active: active (running) since Wed 2023-03-29 02:46:35 UTC; 16s ago Main PID: 11178 (sh) Tasks: 68 (limit: 3443) Memory: 277.0M CGroup: /system.slice/greengrass.service ├─11178 /bin/sh /test/greengrass/v2/alts/current/distro/bin/loader └─11182 java -Dlog.store=FILE -Dlog.store=FILE -Droot=/test/greengrass/v2 -jar /test/greengrass/v2/alts/current/distro/lib/Greengrass.jar --setup-system-service falseMar 29 02:46:35 i350-evk systemd[1]: Started Greengrass Core.Mar 29 02:46:35 i350-evk sh[11178]: Greengrass root: /test/greengrass/v2Mar 29 02:46:35 i350-evk sh[11178]: Java executable: javaMar 29 02:46:35 i350-evk sh[11178]: JVM options: -Dlog.store=FILE -Droot=/test/greengrass/v2Mar 29 02:46:35 i350-evk sh[11178]: Nucleus options: --setup-system-service falseMar 29 02:46:39 i350-evk sh[11182]: Launching Nucleus...Mar 29 02:46:46 i350-evk sh[11182]: Launched Nucleus successfully.my effectiveConfig.yaml :---system: certificateFilePath: 
"/test/greengrass/v2/thingCert.crt" privateKeyPath: "/test/greengrass/v2/privKey.key" rootCaPath: "/test/greengrass/v2/rootCA.pem" rootpath: "/test/greengrass/v2" thingName: "IM30"services: aws.greengrass.LocalHelloWorld: componentType: "GENERIC" configuration: {} dependencies: [] lifecycle: Run: "java -DcomponentName=\"HelloWorld\" -jar /test/greengrass/v2/packages/artifacts/aws.greengrass.LocalHelloWorld/1.0.0/cloudcomponent.jar" version: "1.0.0" aws.greengrass.Nucleus: componentType: "NUCLEUS" configuration: awsRegion: "us-west-2" componentStoreMaxSizeBytes: "10000000000" deploymentPollingFrequencySeconds: "15" envStage: "prod" fleetStatus: periodicStatusPublishIntervalSeconds: 86400 greengrassDataPlaneEndpoint: "" greengrassDataPlanePort: "8443" httpClient: {} interpolateComponentConfiguration: false iotCredEndpoint: "xxx.credentials.iot.us-west-2.amazonaws.com" iotDataEndpoint: "xxx-ats.iot.us-west-2.amazonaws.com" iotRoleAlias: "GreengrassV2TokenExchangeRoleAlias" jvmOptions: "-Dlog.store=FILE" logging: {} mqtt: spooler: {} networkProxy: proxy: {} platformOverride: {} runWithDefault: posixShell: "sh" posixUser: "root:root" telemetry: {} dependencies: [] lifecycle: bootstrap: requiresPrivilege: "true" script: "\nset -eu\nKERNEL_ROOT=\"/test/greengrass/v2\"\nUNPACK_DIR=\"/test/greengrass/v2/packages/artifacts-unarchived/aws.greengrass.Nucleus/2.9.4/aws.greengrass.nucleus\"\ \nrm -r \"$KERNEL_ROOT\"/alts/current/*\necho \"-Dlog.store=FILE\" > \"\ $KERNEL_ROOT/alts/current/launch.params\"\nln -sf \"$UNPACK_DIR\" \"$KERNEL_ROOT/alts/current/distro\"\ \nexit 100" version: "2.9.4" DeploymentService: ComponentToGroups: aws.greengrass.LocalHelloWorld: "4e69ead5-a595-4b96-a7c8-45da74475fe0": "LOCAL_DEPLOYMENT" dependencies: [] GroupToLastDeployment: LOCAL_DEPLOYMENT: configArn: null timestamp: 1679992938275 thing/IM30: configArn: "arn:aws:greengrass:us-west-2:116407744508:configuration:thing/IM30:111" timestamp: 1679993636499 thinggroup/GreengrassQuickStartGroup_test: configArn: "arn:aws:greengrass:us-west-2:116407744508:configuration:thinggroup/GreengrassQuickStartGroup_test:51" timestamp: 1679993661994 GroupToRootComponents: LOCAL_DEPLOYMENT: aws.greengrass.LocalHelloWorld: groupConfigArn: "4e69ead5-a595-4b96-a7c8-45da74475fe0" groupConfigName: "LOCAL_DEPLOYMENT" version: "1.0.0" thing/IM30: {} thinggroup/GreengrassQuickStartGroup_test: {} runtime: ProcessedDeployments: "1679993661353": ConfigurationArn: "arn:aws:greengrass:us-west-2:116407744508:configuration:thing/IM30:111" DeploymentId: "arn:aws:greengrass:us-west-2:116407744508:configuration:thing/IM30:111" DeploymentRootPackages: [] DeploymentStatus: "SUCCEEDED" DeploymentStatusDetails: detailed-deployment-status: "SUCCESSFUL" DeploymentType: "SHADOW" GreengrassDeploymentId: "ef974963-4dfe-49a4-a1fc-73c8aa49a545" version: "0.0.0" FleetStatusService: dependencies: [] lastPeriodicUpdateTime: 1679981624527 sequenceNumber: 122 version: "0.0.0" main: dependencies: - "FleetStatusService:HARD" - "DeploymentService:HARD" - "aws.greengrass.LocalHelloWorld" - "TelemetryAgent:HARD" - "aws.greengrass.Nucleus" - "UpdateSystemPolicyService:HARD" - "aws.greengrass.Nucleus" lifecycle: {} runtime: service-digest: {} TelemetryAgent: dependencies: [] runtime: lastPeriodicAggregationMetricsTime: 1679981624784 lastPeriodicPublishMetricsTime: 1679981624784 version: "0.0.0" UpdateSystemPolicyService: dependencies: [] version: "0.0.0"however while I launch the deployment of idt with command:./devicetester_linux_x86-64 run-suite --suite-id GGV2Q_2.5.0 
--userdate userdata.jsonthe test_manager.log in idt shows[ERROR] [2023-03-28 16:56:12]: Test exited unsuccessfully testCaseId=cloudcomponent error=exit status 1 executionId=9f12b10c-cd42-11ed-bb24-080027641c32[INFO] [2023-03-28 16:56:12]: All tests finished. executionId=9f12b10c-cd42-11ed-bb24-080027641c32[INFO] [2023-03-28 16:56:13]:========== Test Summary ==========Execution Time: 26m50sTests Completed: 7Tests Passed: 6Tests Failed: 1Tests Skipped: 0----------------------------------Test Groups: lambdadeployment: PASSED mqtt: PASSED component: FAILED pretestvalidation: PASSED coredependencies: PASSED version: PASSED----------------------------------Failed Tests: Group Name: component Test Name: cloudcomponent Reason: Failed at 'the Greengrass deployment is COMPLETED on the device after 180 seconds'----------------------------------Path to AWS IoT Device Tester Report: /home/user/test/devicetester_greengrass_v2_linux/results/20230328T162913/awsiotdevicetester_report.xmlPath to Test Execution Logs: /home/user/test/devicetester_greengrass_v2_linux/results/20230328T162913/logsPath to Aggregated JUnit Report: /home/user/test/devicetester_greengrass_v2_linux/results/20230328T162913/GGV2Q_Report.xml==================================logs/component/cloudcomponentgreengrass_2023_03_28_05_0.log shows:2023-03-28T05:33:10.582Z [INFO] (main) com.aws.greengrass.deployment.DeviceConfiguration: Copy Nucleus artifacts to component store. {destination=/test/greengrass/v2/packages/artifacts-unarchived/aws.greengrass.Nucleus/2.9.4/aws.greengrass.nucleus, source=/test/GreengrassInstaller}2023-03-28T05:33:37.517Z [INFO] (main) com.aws.greengrass.lifecyclemanager.KernelLifecycle: Waiting for services to shutdown. {}2023-03-28T05:33:37.543Z [INFO] (main) com.aws.greengrass.lifecyclemanager.Kernel: effective-config-dump-complete. {file=/test/greengrass/v2/config/effectiveConfig.yaml}2023-03-28T05:33:39.019Z [ERROR] (Copier) com.aws.greengrass.util.orchestration.SystemdUtils: systemd-setup. {stderr=Created symlink /etc/systemd/system/multi-user.target.wants/greengrass.service → /etc/systemd/system/greengrass.service., command=systemctl enable greengrass.service}2023-03-28T05:33:39.889Z [INFO] (main) com.aws.greengrass.util.orchestration.SystemdUtils: systemd-setup. Successfully set up systemd service. {}2023-03-28T05:33:39.891Z [INFO] (main) com.aws.greengrass.lifecyclemanager.KernelLifecycle: system-shutdown. {main=null}2023-03-28T05:33:39.902Z [INFO] (main) com.aws.greengrass.lifecyclemanager.KernelLifecycle: Waiting for services to shutdown. {}2023-03-28T05:33:39.933Z [INFO] (main) com.aws.greengrass.lifecyclemanager.Kernel: effective-config-dump-complete. {file=/test/greengrass/v2/config/effectiveConfig.yaml}2023-03-28T05:33:39.937Z [INFO] (Serialized listener processor) com.aws.greengrass.lifecyclemanager.KernelLifecycle: executor-service-shutdown-initiated. {}2023-03-28T05:33:39.938Z [INFO] (main) com.aws.greengrass.lifecyclemanager.KernelLifecycle: Waiting for executors to shutdown. {}2023-03-28T05:33:39.940Z [INFO] (main) com.aws.greengrass.lifecyclemanager.KernelLifecycle: executor-service-shutdown-complete. {executor-terminated=true, scheduled-executor-terminated=true}2023-03-28T05:33:39.941Z [INFO] (main) com.aws.greengrass.lifecyclemanager.KernelLifecycle: context-shutdown-initiated. {}2023-03-28T05:33:39.945Z [INFO] (main) com.aws.greengrass.lifecyclemanager.KernelLifecycle: context-shutdown-complete. 
{}greengrass-test-run.log shows:2023-Mar-29 10:19:21,514 [cloudComponent] [idt-c057b1fc3d6bc618a399] [INFO] greengrass/features/cloudComponent.feature - Finished step: 'I create a Greengrass deployment with components' with status PASSED2023-Mar-29 10:19:22,164 [cloudComponent] [idt-c057b1fc3d6bc618a399] [INFO] com.aws.greengrass.testing.resources.AbstractAWSResourceLifecycle - Created GreengrassDeployment in GreengrassV2Lifecycle2023-Mar-29 10:19:22,165 [cloudComponent] [idt-c057b1fc3d6bc618a399] [INFO] com.aws.greengrass.testing.features.DeploymentSteps - Created Greengrass deployment: fe860f32-3236-4493-b71d-56e585382c0c2023-Mar-29 10:19:22,166 [cloudComponent] [idt-c057b1fc3d6bc618a399] [INFO] greengrass/features/cloudComponent.feature - Finished step: 'I deploy the Greengrass deployment configuration to thing group' with status PASSED2023-Mar-29 10:22:22,494 [cloudComponent] [idt-c057b1fc3d6bc618a399] [ERROR] greengrass/features/cloudComponent.feature - Failed at step: 'the Greengrass deployment is COMPLETED on the device after 180 seconds'java.lang.IllegalStateException: Deployment idt-c057b1fc3d6bc618a399-gg-deployment did not reach COMPLETED at com.aws.greengrass.testing.features.DeploymentSteps.deploymentSucceeds(DeploymentSteps.java:311) ~[AWSGreengrassV2TestingIDT-1.0.jar:?] at ✽.the Greengrass deployment is COMPLETED on the device after 180 seconds(classpath:greengrass/features/cloudComponent.feature:26) ~[?:?]2023-Mar-29 10:22:22,510 [cloudComponent] [idt-c057b1fc3d6bc618a399] [INFO] greengrass/features/cloudComponent.feature - Finished step: 'the com.aws.HelloWorld log on the device contains the line "Hello World!!" within 20 seconds' with status SKIPPED2023-Mar-29 10:22:22,511 [cloudComponent] [idt-c057b1fc3d6bc618a399] [INFO] greengrass/features/cloudComponent.feature - Finished step: 'I create a Greengrass deployment with components' with status SKIPPED2023-Mar-29 10:22:22,512 [cloudComponent] [idt-c057b1fc3d6bc618a399] [INFO] greengrass/features/cloudComponent.feature - Finished step: 'I deploy the Greengrass deployment configuration to thing group' with status SKIPPED2023-Mar-29 10:22:22,513 [cloudComponent] [idt-c057b1fc3d6bc618a399] [INFO] greengrass/features/cloudComponent.feature - Finished step: 'the Greengrass deployment is COMPLETED on the device after 180 seconds' with status SKIPPED2023-Mar-29 10:22:22,513 [cloudComponent] [idt-c057b1fc3d6bc618a399] [INFO] greengrass/features/cloudComponent.feature - Finished step: 'the com.aws.HelloWorld log on the device contains the line "Hello World Updated!!" within 20 seconds' with status SKIPPEDTo ReproduceSee above.Expected behaviorIDT for greengrass v2 pass all of the testActual behaviorIDT for greengrass v2 failed on cloud component testEnvironmentOS: device for greengrass Yocto linux aarch64device for idt Ubuntu 22.04.1 LTSJDK version:"openjdk version "1.8.0_282"//also have test with amazon-corretto-17.0.6.10.1-linux-aarch64https://docs.aws.amazon.com/corretto/latest/corretto-17-ug/downloads-list.htmlseems the error still happenedNucleus version:2.9.4IDT version:4.7.0Additional contextNotice the same error on https://github.com/aws-greengrass/aws-greengrass-nucleus/issues/876#issuecomment-786948524not sure if it is relatedbut other test PassFollowComment"
(nucleus): IDT 4.7.0 for Greengrass v2 cloudcomponent test failed at 'the Greengrass deployment is COMPLETED on the device after 180 seconds'
https://repost.aws/questions/QUfFmnIZhqQTa2qcwMpale-A/nucleus-idt-4-7-0-for-greengrass-v2-cloudcomponent-test-failed-failed-at-the-greengrass-deployment-is-completed-on-the-device-after-180-seconds
true
"1Hello @loordb,Looking at your provided information I would suggest restarting greengrass and then re-running the tests again with the timeout-multiplier flag. This will increase the timeout of the tests and make sure what you are running into is not just simply a time issue.To restart greengrass:sudo service greengrass restartTo re-run tests with the timeout-multiplier:./devicetester_linux_x86-64 run-suite --suite-id GGV2Q_2.5.0 --timeout-multiplier 5 --userdata userdata.jsonIf you are still running into an error, can you please post your latest greengrass_YYYY_MM_DD_HH_X.log again along side the deploymentId and the deployment lifecycle phases the device has gone through, and the timestamps for the phase transition.Knowing this information will tell us if you are actually running into an error or if it is just a time issue.Regards.CommentSharejrcarb-AWSanswered 2 months agoloordb 2 months agoHI @jrcarb-AWS I have upload the log below after following the steps you mentions help it can clarify the problem thxShare0Accepted AnswerI think the security credential cause the problem see the answer by the following linkhttps://repost.aws/questions/QU9lwv47QcQHWAE0IdCiy5Ig/can-anyone-help-me-out-with-the-idt-error-problem-the-greengrass-deployment-is-completed-on-the-device-after-180-secondsCommentShareloordbanswered 2 months ago0Hello @jrcarb-AWS after trying the steps you have mentionedI have upload the full log inhttps://github.com/yuchinchenTW/IDT_-forgreengrassv2_log/tree/123/20230330T093130hope it helps to clarify the errorsudo service greengrass restart./devicetester_linux_x86-64 run-suite --suite-id GGV2Q_2.5.0 --timeout-multiplier 5 --userdata userdata.jsonthe test_manager.log shows the same error:10:31:25.930 [otf-1.0.0-SNAPSHOT] [mqtt] [idt-e44384162d7530cffc52] [INFO] com.aws.greengrass.testing.resources.AbstractAWSResourceLifecycle - Removed GreengrassDeployment in GreengrassV2Lifecycle10:31:26.167 [otf-1.0.0-SNAPSHOT] [mqtt] [idt-e44384162d7530cffc52] [INFO] com.aws.greengrass.testing.resources.AbstractAWSResourceLifecycle - Removed GreengrassDeployment in GreengrassV2Lifecycle10:31:26.493 [otf-1.0.0-SNAPSHOT] [mqtt] [idt-e44384162d7530cffc52] [INFO] com.aws.greengrass.testing.resources.AbstractAWSResourceLifecycle - Removed GreengrassComponent in GreengrassV2Lifecycle10:31:26.774 [otf-1.0.0-SNAPSHOT] [mqtt] [idt-e44384162d7530cffc52] [INFO] com.aws.greengrass.testing.resources.AbstractAWSResourceLifecycle - Removed GreengrassComponent in GreengrassV2Lifecycle10:31:27.569 [otf-1.0.0-SNAPSHOT] [mqtt] [idt-e44384162d7530cffc52] [INFO] com.aws.greengrass.testing.resources.AbstractAWSResourceLifecycle - Removed S3Object in S3Lifecycle10:31:28.173 [otf-1.0.0-SNAPSHOT] [mqtt] [idt-e44384162d7530cffc52] [INFO] com.aws.greengrass.testing.resources.AbstractAWSResourceLifecycle - Removed S3Bucket in S3Lifecycle10:31:28.179 [otf-1.0.0-SNAPSHOT] [mqtt] [idt-e44384162d7530cffc52] [INFO] com.aws.greengrass.testing.features.AWSResourcesSteps - Successfully removed externally created resources10:31:28.182 [otf-1.0.0-SNAPSHOT] [mqtt] [idt-e44384162d7530cffc52] [INFO] com.aws.greengrass.testing.features.LoggerSteps - Clearing thread context on scenario: 'Component publishes MQTT message to Iot core and retrieves it as well'10:31:28.186 [] [mqtt] [idt-e44384162d7530cffc52] [INFO] com.aws.greengrass.testing.launcher.reporting.StepTrackingReporting - Passed: 'Component publishes MQTT message to Iot core and retrieves it as well'[INFO] [2023-03-30 10:31:32]: All tests finished. 
executionId=98b8bcae-ce9a-11ed-8691-080027641c32[INFO] [2023-03-30 10:31:34]:========== Test Summary ==========Execution Time: 59m58sTests Completed: 7Tests Passed: 6Tests Failed: 1Tests Skipped: 0----------------------------------Test Groups: lambdadeployment: PASSED mqtt: PASSED pretestvalidation: PASSED coredependencies: PASSED version: PASSED component: FAILED----------------------------------Failed Tests: Group Name: component Test Name: cloudcomponent Reason: Failed at 'the Greengrass deployment is COMPLETED on the device after 180 seconds'----------------------------------Path to AWS IoT Device Tester Report: /home/user/test/devicetester_greengrass_v2_linux/results/20230330T093130/awsiotdevicetester_report.xmlPath to Test Execution Logs: /home/user/test/devicetester_greengrass_v2_linux/results/20230330T093130/logsPath to Aggregated JUnit Report: /home/user/test/devicetester_greengrass_v2_linux/results/20230330T093130/GGV2Q_Report.xml==================================here is the log you asked not sure if it help because after hours of search seems nothing infom for megreengrass_2023_03_30_01_0.logsee the log in the link due to the log is too long and cannot post in the commenthttps://github.com/yuchinchenTW/IDT_-forgreengrassv2_log/blob/123/20230330T093130/logs/component/cloudcomponent/greengrass_2023_03_30_01_0.logit seems like the error shows the same at greengrass-test-run.log2023-Mar-30 10:01:55,825 [cloudComponent] [idt-63ea438cc5fbbf1a023d] [INFO] com.aws.greengrass.testing.features.DeploymentSteps - Created Greengrass deployment: cd08a43f-4ffe-4849-aebb-de46d9fc5ee92023-Mar-30 10:01:55,826 [cloudComponent] [idt-63ea438cc5fbbf1a023d] [INFO] greengrass/features/cloudComponent.feature - Finished step: 'I deploy the Greengrass deployment configuration to thing group' with status PASSED2023-Mar-30 10:16:56,096 [cloudComponent] [idt-63ea438cc5fbbf1a023d] [ERROR] greengrass/features/cloudComponent.feature - Failed at step: 'the Greengrass deployment is COMPLETED on the device after 180 seconds'java.lang.IllegalStateException: Deployment idt-63ea438cc5fbbf1a023d-gg-deployment did not reach COMPLETED at com.aws.greengrass.testing.features.DeploymentSteps.deploymentSucceeds(DeploymentSteps.java:311) ~[AWSGreengrassV2TestingIDT-1.0.jar:?] at ✽.the Greengrass deployment is COMPLETED on the device after 180 seconds(classpath:greengrass/features/cloudComponent.feature:26) ~[?:?]notice that the idt environment and greengrass device do have the time difference (UTC+8 && UTC+0 ;8 hours diff) but we think that the time differ does not causing the result of the error I mean it should work even if the device is in different placesCommentShareloordbanswered 2 months agojrcarb-AWS 2 months agoHello @loordb,First off, thank you for submitting all of your logs, it was really helpful in debugging the issue. From your cloudcomponent/greengrass.log logs we can see that the component deployment started at around 1:58, and finished deployment successfully at around 2:31. Which means the deployment took around more than 30 mins, so my previous timeout multiplier was not enough. 
Can you please try again with a timeout multiplier of 12?./devicetester_linux_x86-64 run-suite --suite-id GGV2Q_2.5.0 --timeout-multiplier 12 --userdata userdata.jsonRegards.Shareloordb 2 months agoHello @jrcarb-AWS thx for the early reply, we have tried the command you have mentioned see the result below, seems like the same error still happening, we also have upload the new tesing log to git hope it helpsShare0Hello @jrcarb-AWSafter trying with the command which you have mentioned./devicetester_linux_x86-64 run-suite --suite-id GGV2Q_2.5.0 --timeout-multiplier 12 --userdata userdata.jsonseems like the same error still happenedafter couple times of tryingthe log have upload to the link belowhttps://github.com/yuchinchenTW/IDT_-forgreengrassv2_log/tree/123/20230331T10245212:26:58.321 [otf-1.0.0-SNAPSHOT] [cloudComponent] [idt-07ade2fc27374da68d46] [INFO] com.aws.greengrass.testing.DefaultGreengrass - Leaving Greengrass running on pid: 012:26:58.322 [otf-1.0.0-SNAPSHOT] [cloudComponent] [idt-07ade2fc27374da68d46] [INFO] com.aws.greengrass.testing.features.FileSteps - Stopping Greengrass service..12:27:24.342 [otf-1.0.0-SNAPSHOT] [cloudComponent] [idt-07ade2fc27374da68d46] [INFO] com.aws.greengrass.testing.features.FileSteps - Starting Greengrass service..12:27:24.896 [otf-1.0.0-SNAPSHOT] [cloudComponent] [idt-07ade2fc27374da68d46] [INFO] com.aws.greengrass.testing.resources.AbstractAWSResourceLifecycle - Removed GreengrassDeployment in GreengrassV2Lifecycle12:27:25.215 [otf-1.0.0-SNAPSHOT] [cloudComponent] [idt-07ade2fc27374da68d46] [INFO] com.aws.greengrass.testing.resources.AbstractAWSResourceLifecycle - Removed GreengrassDeployment in GreengrassV2Lifecycle12:27:25.437 [otf-1.0.0-SNAPSHOT] [cloudComponent] [idt-07ade2fc27374da68d46] [INFO] com.aws.greengrass.testing.resources.AbstractAWSResourceLifecycle - Removed GreengrassDeployment in GreengrassV2Lifecycle12:27:25.711 [otf-1.0.0-SNAPSHOT] [cloudComponent] [idt-07ade2fc27374da68d46] [INFO] com.aws.greengrass.testing.resources.AbstractAWSResourceLifecycle - Removed GreengrassComponent in GreengrassV2Lifecycle12:27:26.373 [otf-1.0.0-SNAPSHOT] [cloudComponent] [idt-07ade2fc27374da68d46] [INFO] com.aws.greengrass.testing.resources.AbstractAWSResourceLifecycle - Removed S3Object in S3Lifecycle12:27:26.883 [otf-1.0.0-SNAPSHOT] [cloudComponent] [idt-07ade2fc27374da68d46] [INFO] com.aws.greengrass.testing.resources.AbstractAWSResourceLifecycle - Removed S3Bucket in S3Lifecycle12:27:26.884 [otf-1.0.0-SNAPSHOT] [cloudComponent] [idt-07ade2fc27374da68d46] [INFO] com.aws.greengrass.testing.features.AWSResourcesSteps - Successfully removed externally created resources12:27:26.885 [otf-1.0.0-SNAPSHOT] [cloudComponent] [idt-07ade2fc27374da68d46] [INFO] com.aws.greengrass.testing.features.LoggerSteps - Clearing thread context on scenario: 'As a developer, I can create a component in Cloud and deploy it on my device via thing group'12:27:26.885 [] [cloudComponent] [idt-2e3f3b9d5e6bd3bcce32] [INFO] com.aws.greengrass.testing.launcher.reporting.StepTrackingReporting - Passed: 'As a developer, I can create a component in Cloud and deploy it on my device'12:27:26.886 [] [cloudComponent] [idt-07ade2fc27374da68d46] [ERROR] com.aws.greengrass.testing.launcher.reporting.StepTrackingReporting - Failed: 'As a developer, I can create a component in Cloud and deploy it on my device via thing group': Failed at 'the Greengrass deployment is COMPLETED on the device after 180 seconds'12:27:27.131 [] [] [] [INFO] 
com.aws.greengrass.testing.modules.AWSResourcesCleanupModule - Cleaned up TestContext{testId=TestId{prefix=idt, id=2e3f3b9d5e6bd3bcce32}, testDirectory=/tmp/gg-testing-6897313184919651577/idt-2e3f3b9d5e6bd3bcce32, testResultsPath=/home/user/test/devicetester_greengrass_v2_linux/results/20230331T102452/logs/component/cloudcomponent, cleanupContext=CleanupContext{persistAWSResources=false, persistInstalledSoftware=true, persistGeneratedFiles=false}, initializationContext=InitializationContext{persistModes=[installed.software], persistAWSResources=false, persistInstalledSoftware=true, persistGeneratedFiles=false}, logLevel=DEBUG, installRoot=/test/greengrass/v2, currentUser=ggc_user, coreThingName=IM30, coreVersion=2.9.4, tesRoleName=GreengrassV2TokenExchangeRole, hsmConfigured=false, trustedPluginsPaths=[]}[ERROR] [2023-03-31 12:27:30]: Test exited unsuccessfully executionId=38162d05-cf6b-11ed-b7ea-080027641c32 testCaseId=cloudcomponent error=exit status 1[INFO] [2023-03-31 12:27:30]: All tests finished. executionId=38162d05-cf6b-11ed-b7ea-080027641c32[INFO] [2023-03-31 12:27:33]: ========== Test Summary ==========Execution Time: 2h2m32sTests Completed: 7Tests Passed: 6Tests Failed: 1Tests Skipped: 0----------------------------------Test Groups: pretestvalidation: PASSED coredependencies: PASSED version: PASSED mqtt: PASSED lambdadeployment: PASSED component: FAILED----------------------------------Failed Tests: Group Name: component Test Name: cloudcomponent Reason: Failed at 'the Greengrass deployment is COMPLETED on the device after 180 seconds'----------------------------------Path to AWS IoT Device Tester Report: /home/user/test/devicetester_greengrass_v2_linux/results/20230331T102452/awsiotdevicetester_report.xmlPath to Test Execution Logs: /home/user/test/devicetester_greengrass_v2_linux/results/20230331T102452/logsPath to Aggregated JUnit Report: /home/user/test/devicetester_greengrass_v2_linux/results/20230331T102452/GGV2Q_Report.xml==================================also upload our config help here hope it helpsconfig.json{ "log": { "location": "../logs/" }, "configFiles": { "root": "../configs", "device": "../configs/device.json" }, "testPath": "../tests/", "reportPath": "../results/", "certificatePath": "../certificates/", "awsRegion": "us-west-2", "auth": { "method": "environment" }}device.json[ { "id": "IM30", "sku": "IM30", "features": [ { "name": "arch", "value": "aarch64" }, { "name": "ml", "value": "no" }, { "name": "docker", "value": "no" }, { "name": "streamManagement", "value": "no" }, { "name": "hsi", "value": "no" } ], "devices": [ { "id": "test", "operatingSystem": "Linux", "connectivity": { "protocol": "ssh", "ip": "192.168.120.29", "port": 22, "auth": { "method": "password", "credentials": { "user": "root", "password": "123" } } } } ] }]CommentShareloordbanswered 2 months agoloordb 2 months agolater on we also have tried ./devicetester_linux_x86-64 run-suite --suite-id GGV2Q_2.5.0 --timeout-multiplier 20 --userdata userdata.json and the log shows the same error . 
Notice that we disconnect the ssh while after we see the same cloudcomponent error log in the log(because it take too long to test in --timeout-multiplier 20)so it will show the ssh disconnect log at the end17:45:40.765 [otf-1.0.0-SNAPSHOT] [cloudComponent] [idt-4c42025f3cf8ba495b5d] [ERROR] com.aws.greengrass.testing.idt.IDTDevice - Failed to execute a command on null, CommandInput{line=sh, workingDirectory=null, input=null, timeout=null, args=[-c, systemctl stop greengrass.service]}com.amazonaws.iot.idt.exception.IDTServiceException: failed to execute on device: couldn't connect a remote session with error: ssh: unexpected packet in response to channel open: <nil>at com.amazonaws.iot.idt.IDTCredentialsInterceptor.enrich(IDTCredentialsInterceptor.java:73)at com.amazonaws.iot.idt.IDTCredentialsInterceptor.intercept(IDTCredentialsInterceptor.java:56)Before that all of the log seeems the samesee the --timeout-multiplier 20 log in the linkhttps://github.com/yuchinchenTW/IDT_-forgreengrassv2_log/tree/123/20230331T141120/logsShare0HelloBased on the logs, after the timeout multiplier of 20 is applied, the cloud component test indeed went through all the steps without any issue. But, something triggered the retry of the same test, and during the retry the ssh timeout led to the failure.My hypothesis is that, since this is a preinstalled Greengrass running for long time, the accumulated logs are coming in the way of the testcases to run properly. While IDT/Greengrass teams look into hardening the testcases in this particular usecase, I have one suggestion for you to try.Can you please stop Greengrass, remove all the logs in logs folder, and restart the Greengrass installation, before running the tests again, with 20 multiplier.CommentSharesatvemulAWSanswered a month ago"
My question is whether t4g instances are compatible with io2 Block Express volumes.
io2 block express
https://repost.aws/questions/QUbi0jQx58Sjek1HgBUv7ZEA/io2-block-express
false
"1io2 Block Express volumes are supported with C6in, C7g, M6in, M6idn, R5b, R6in, R6idn, Trn1, X2idn, and X2iedn instances.https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/provisioned-iops.html#io2-block-expressCommentSharehayao-kanswered 4 months ago0You can attach an io2 EBS volume to t4g, however you will not get the full features of the EBS volume on instance types that do not fully support BX as mentioned in the previous answer.CommentShareMODERATORphilawsanswered 4 months ago"
"I'm trying to create a service control policy to restrict creating Amazon Workspaces only with encrypted volumes. For example:{"Effect": "Deny","Action": ["workspaces:CreateWorkspaces"],"Condition": {"ForAnyValues:Bool": [ {"workspaces:UserVolumeEncryptionEnabled": "false"}, {"workspaces:RootVolumeEncryptionEnabled": "false"} ]},"Resource": ["*"]}However, the service control policy editor gives me an error: "The provided policy document does not meet the requirements of the specified policy type." Why is this happening?FollowComment"
Using a service control policy to restrict Amazon WorkSpaces to encrypted volumes
https://repost.aws/questions/QUJUlIreSlTES-96eoJN6Lhw/using-amazon-service-control-policy-to-restrict-amazon-workspaces-with-encypted-volumes
true
"0Accepted AnswerAmazon WorkSpaces doesn't have any service level condition keys that you can use with a service control policy. Therefore, specifying the "workspaces:userVolumeEncryptionEnabled" as a condition in your policy will cause an error. For more information, see Specify WorkSpaces resources in an IAM policy.CommentShareEXPERTDzung_Nanswered 3 years ago"
"Hello,We are using AWS Managed Microsoft AD services , but recently the domain controllers(which are managed by AWS) has the issue, we can't resolve them because we don't have access to it and it seems without paying for premium support we can't ask AWS to fix the issue of their service. Please let us know what options do we have , because we are getting trust issues,RPC errors, we can't create or manage users, computer .FollowComment"
AWS Managed AD services
https://repost.aws/questions/QUt0MSgVHcTYqTFgGkjn2Iuw/aws-managed-ad-services
false
"0Hi GillesI understand your issue believe your could be configuration problem on your end. There are troubleshooting steps, first check your AD status you will there see steps for resolution for every status of the AD [1].Can follow the documentation on troubleshooting your AWS Managed AD [2].https://docs.aws.amazon.com/directoryservice/latest/admin-guide/simple_ad_troubleshooting_reasons.htmlhttps://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_troubleshooting.htmlCommentShareLwazianswered a year ago"
"Error message:"FileNotFoundError: [Errno 2] No such file or directory: '/opt/ml/processing/output/profile_case.html'"Background:I am working in Sagemaker using python trying to profile a dataframe that is saved in a S3 bucket with pandas profiling. The data is very large so instead of spinning up a large EC2 instance, I am using a SKLearn processor.Everything runs fine but when the job finishes it does not save the pandas profile (a .html file) in a S3 bucket or back in the instance Sagemaker is running in.When I try to export the .html file that is created from the pandas profile, I keep getting errors saying that the file cannot be found.Does anyone know of a way to export the .html file out of the temporary 24xl instance that the SKLearn processor is running in to S3? Below is the exact code I am using:import osimport sysimport subprocessdef install(package): subprocess.check_call([sys.executable, "-q", "-m", "pip", "install", package])install('awswrangler')install('tqdm')install('pandas')install('botocore==1.19.4')install('ruamel.yaml')install('pandas-profiling==2.13.0')import awswrangler as wrimport pandas as pdimport numpy as npimport datetime as dtfrom dateutil.relativedelta import relativedeltafrom string import Templateimport gcimport boto3from pandas_profiling import ProfileReportclient = boto3.client('s3')session = boto3.Session(region_name="eu-west-2")%%writefile casetableprofile.pyimport osimport sysimport subprocessdef install(package): subprocess.check_call([sys.executable, "-q", "-m", "pip", "install", package])install('awswrangler')install('tqdm')install('pandas')install('botocore')install('ruamel.yaml')install('pandas-profiling')import awswrangler as wrimport pandas as pdimport numpy as npimport datetime as dtfrom dateutil.relativedelta import relativedeltafrom string import Templateimport gcimport boto3from pandas_profiling import ProfileReportclient = boto3.client('s3')session = boto3.Session(region_name="eu-west-2")def run_profile(): query = """ SELECT * FROM "healthcloud-refined"."case" ; """ tableforprofile = wr.athena.read_sql_query(query, database="healthcloud-refined", boto3_session=session, ctas_approach=False, workgroup='DataScientists') print("read in the table queried above") print("got rid of missing and added a new index") profile_tblforprofile = ProfileReport(tableforprofile, title="Pandas Profiling Report", minimal=True) print("Generated carerequest profile") return profile_tblforprofileif __name__ == '__main__': profile_tblforprofile = run_profile() print("Generated outputs") output_path_tblforprofile = ('profile_case.html') print(output_path_tblforprofile) profile_tblforprofile.to_file(output_path_tblforprofile) #Below is the only part where I am getting errorsimport boto3import os s3 = boto3.resource('s3')s3.meta.client.upload_file('/opt/ml/processing/output/profile_case.html', 'intl-euro-uk-datascientist-prod','Mark/healthclouddataprofiles/{}'.format(output_path_tblforprofile)) import sagemakerfrom sagemaker.processing import ProcessingInput, ProcessingOutputsession = boto3.Session(region_name="eu-west-2")bucket = 'intl-euro-uk-datascientist-prod'prefix = 'Mark'sm_session = sagemaker.Session(boto_session=session, default_bucket=bucket)sm_session.upload_data(path='./casetableprofile.py', bucket=bucket, key_prefix=f'{prefix}/source')import boto3#import sagemakerfrom sagemaker import get_execution_rolefrom sagemaker.sklearn.processing import SKLearnProcessorregion = boto3.session.Session().region_nameS3_ROOT_PATH = "s3://{}/{}".format(bucket, prefix)role 
= get_execution_role()sklearn_processor = SKLearnProcessor(framework_version='0.20.0', role=role, sagemaker_session=sm_session, instance_type='ml.m5.24xlarge', instance_count=1)sklearn_processor.run(code='s3://{}/{}/source/casetableprofile.py'.format(bucket, prefix), inputs=[], outputs=[ProcessingOutput(output_name='output', source='/opt/ml/processing/output', destination='s3://intl-euro-uk-datascientist-prod/Mark/')])Thank you in advance!!!FollowComment"
How to save a .html file to S3 that is created in a Sagemaker processing container
https://repost.aws/questions/QU5abOieUyQZSFvyRwfApRVA/how-to-save-a-html-file-to-s3-that-is-created-in-a-sagemaker-processing-container
true
"1Accepted AnswerHi,Firstly, you should not (usually) need to directly interact with S3 from your processing script: The fact that you've configured your ProcessingOutput means that any files your script saves in /opt/ml/processing/output should automatically get uploaded to your s3://... destination URL. Of course there might be particular special cases where you want to directly access S3 from your script, but in general the processing job inputs and outputs should do it for you, to keep your code nice and simple.I'm no Pandas Profiler expert, but I think the error might be coming from here: output_path_tblforprofile = ('profile_case.html') print(output_path_tblforprofile) profile_tblforprofile.to_file(output_path_tblforprofile)Doesn't this just save the report to profile_case.html in your current working directory? That's not the /opt/ml/processing/output directory: It's usually the folder where the script is downloaded to the container I believe. The FileNotFound error is telling you that the HTML file is not getting created in the folder you expect, I think.So I would suggest to make your output path explicit e.g. /opt/ml/processing/output/profile_case.html, and also remove the boto3/s3 section at the end - hope that helps!CommentShareEXPERTAlex_Tanswered 9 months agoMarkus24135 9 months agoThis worked!!!!!!! Thank you so much!!!!!!!!!!!!Share"
"Experienced programmer, less experienced database user, beginner AWS user... second-guessing myself to death, so please have patience which what I'm sure is a beginner question.I'm looking at using DynamoDb as data storage behind my lambda. This will be a small database (under 10K records) and traffic will be low, so I'm undoubtedly overthinking it... but I haven't used NoSQL databases before, and I'm trying to figure out how to map from my conceptual data structures to DynamoDb's indexed-pile-of-mixed-record-types mindset.The DynamoDb developer's guide (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide) seems to be a good discussion of recommended design principles for this approach, but I'm still having trouble wrapping my head around those relatively abstract recommendations. I think it might help me a lot to see some examples of how people have defined DynamoDb records and keys for specific applications. If those were commented with explanations of why those design decisions were made, that might help even more. And I'm sure I'm not the only one who'd find best-practice examples useful to illuminate the best-practice theoretical discussion.Does such a collection exist? Haven't found it yet if so.Context follows, in case anyone cares:My application is a fairly trivial one: Indexing archives of a radio show for retrieval by episode number, by broadcast date (may be N:1 since rebroadcasts happen), and eventually perhaps by keywords (specifically guests on episodes, N:N since guests may appear multiple times). Since episodes are only one per day, this is a relatively small list -- increasing only at 365 per year, and with the rebroadcasts decades of production still have us under 5000 episodes total.The obvious data structure for in-memory implementation would be one table mapping unique episode number to episode details (which could include a list of broadcast dates and a list of keywords for that episode), one table mapping unique date to episode number for quick two-step lookup, and a table mapping keywords to lists of episode numbers (followed by list-intersection if multiple keywords are being matched upon). But that doesn't seem to be how DynamoDb wants data handled; the dev guide seems to prefer having all the records (and all the record types?) in a single conceptual table with secondary keys (which act like shadow tables, if I'm understanding this correctly) used both to separate them back out and to perform specific retrievals.Eventually I may want similar lookup for other shows that overlap this one. Unclear at this time whether that's best handled with a single table having show ID as one of the columns, or separate tables which could be unioned if I want to find all shows for a particular date or with a particular guest.I suspect that the best solution(s) is/are immediately obvious to an experienced DynamoDb user. But as a beginner I'm having trouble wrapping my head around it. Hence the desire to see how others have handled similar data patterns.I suppose I should also say that I'm not by any means locked into using DynamoDb. It just seems to be what's most commonly suggested for small-dataset evolving-data applications on AWS. If I'm barking up the wrong tree, pointers to better ones would be appreciated before I invest too much more heavily in this solution.FollowComment"
Is there a "DynamoDb by example" document anywhere
https://repost.aws/questions/QUZERMGu8RSDmboJPc0MJQ8g/is-there-a-dynamodb-by-example-document-anywhere
false
"3I would recommend you watch Rick Houlihan's Dynamodb office hours youtube videos. Rick models real use cases in each video and he explains each pattern he uses and why you should use them.When it comes to NoSQL databases you shouldn't think how data is organized but how you will access that data. Plus prioritize those patterns so you can optimize the patterns that are more commonly used. I would recommend you list all your access patterns, like:fetching an episode by episode number.fetching all episodes that occurred in given time range.fetching all episodes which include a given keyword list (this is a tricky one in dynamodb)Another key thing to take in mind is how partition key is built. You want your partition keys to be as distributed as possible so dynamodb can scale in easily. If you just have one single radio show (with many episodes). It looks to me a good PK here would be the episodeNumber, although that ties you up to have one single radio show.Since an episode may be broadcasted more than once, I would include a SK based on broadcastedAt (this gives you a bonus pattern, iterate over the different broadcast for a given episode number). Something like:|pk|sk|attributes||---------||<episodeNumber>|Metadata|<episode details>||<episodeNumber>|<broadcastedAt>|<you could duplicate episode details here depending on how reads/writes happen>|That will cover your first pattern + the bonus pattern of accessing different broadcasts of the same episode by date.The second pattern: fetching all episodes that occurred in given time range, will depend on how you will query that range, is it by day? other granularity? I would add a GSI which PK is a day, then within that partition you will have all episodes that occurred that day (if you need query more than one day, then you would need to run parallel queries though).|pk|sk|gsi1pk|gsi1sk|attributes||---------||<episodeNumber>|Metadata|||<episode details>||<episodeNumber>|<broadcastedAt>|<broadcastedAtDay>|<episodeNumber>|<you could duplicate episode details here depending on how reads/writes happen>|The third pattern is quite tricky as you don't know in advance how many keywords you have. If your app is a write-once-read-many application, then I would duplicate episode entries in different partitions based on those keywords, so you have data duplicated but optimized for reading. To do so, there are a few things your app must take in mind:writing an episode will be a mix of write/delete items in the database.you must sort keywords at for storage purposes.|pk|sk|gsi1pk|gsi1sk|attributes||---------||<episodeNumber>|Metadata|||<episode details>||<episodeNumber>|<broadcastedAt>|<broadcastedAtDay>|<episodeNumber>|<you could duplicate episode details here depending on how reads/writes happen>||<keyword1>|<broadcastedAt>|||<you could duplicate episode details here depending on how reads/writes happen>||<keyword2>|<broadcastedAt>#<episodeNumber>|||<you could duplicate episode details here depending on how reads/writes happen>||<keyword1>#<keyword2>|<broadcastedAt>#<episodeNumber>|||<you could duplicate episode details here depending on how reads/writes happen>|CommentSharecjuegaanswered a year ago2I would really recommend The DynamoDB book from Alex Debrie.There is also the cheatsheet with summary of best practices and patterns.CommentShareMGanswered a year ago1Consider DynamoDB, explained - A Primer on the DynamoDB NoSQL database. 
The authors blog also has a number of articles on DynamoDB.CommentShareRoBanswered a year agoKubyc Solutions a year agoThanks, reading through that -- it's answered some of my questions, so far. (This shouldn't be hard, I'm just stumbling over the shift in mindset.) And thanks for your patience!Share0Agreed, Rick Houlihan's the man to follow when learning about DynamoDB.Plenty of AWS tech talks/re:invent content on YouTube, he also makes regular appearances on the "Amazon DynamoDB | Office Hours" thread on the AWS Twitch channel.CommentShareDaniel Craigieanswered a year ago0Hi,You could also start with a single database document structure:{ "EpisodeId": { "S": "EP01" }, "Title": { "S": "Title" }, "Guests": { "SS": [ "Jacco", "John" ] }, "Keywords": { "SS": [ "aws" ] }, "AiringDates": { "SS": [ "2021-12-12", "2021-12-19" ] }}EpisodeId would be the partition key.All necessary query operations can be easily performed using a Scan. You will always get the full details of the episode in one operation.The API to access the data should be of more concern:createEpisode episodeId, airDates, guests, keywordsdeleteEpisode episodeIdgetEpisode episodeIdaddAirDate episodeId, airDateremoveAirDate episodeId, airDateaddGuest episodeId, guestremoveGuest episodeId, guestaddKeyword episodeId, keywordremoveKeyword episodeId, keywordgetEpisodesByAirDate airdategetEpisodesByKeyword keywordgetEpisodesByGuest guestIf you database grows and feel it is not performing any more or that you pay too much for the scans you can switch to using a more complicated database design. The API can stay the same.One probable improvement you might consider doing right away is using a separate table for the guests. And store their IDs in the episode table instead of the names. The API could use BatchGetItem if you want to return the details of the guests when getting episodes (potentially caching the guests).Going for a more complicated single-table database design is actually for access optimization which in this case might be immature.Regards, JaccoCommentShareJaccoPKanswered a year ago"
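As a rough illustration of the access patterns discussed above, here is a hedged boto3 sketch; the table name "Episodes", the key names pk/gsi1pk, and the index name "gsi1" are assumptions taken from the sample layout, not an existing schema.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Episodes")  # hypothetical table name

# Pattern 1: fetch an episode (metadata plus its broadcasts) by episode number.
episode_items = table.query(KeyConditionExpression=Key("pk").eq("EP0123"))["Items"]

# Pattern 2: fetch all episodes broadcast on a given day via a GSI keyed on the broadcast day.
by_day = table.query(
    IndexName="gsi1",
    KeyConditionExpression=Key("gsi1pk").eq("2021-12-19"),
)["Items"]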
"I'm running about 20 different devices in my dev environment using Iot and the javascript device sdk (the device is running a node process). The devices connect fine, they receive shadow document updates, they receive jobs, they can post to topics, etc. I've enabled fleet indexing with thing connectivity enabled. When I run the query "connectivity.connected:true", the query always returns no devices. When I run the query "connectivity.connected:false", the query always returns all devices. I know the devices are connected, so I was wondering if I was missing something in my code that would make that connectivity value change to true. Code is below (with handlers for events removed): const name = this.getName(); const thingShadowParams = { privateKey: privatekeyBuffer, clientCert: clientCertBuffer, caCert: caCertBuffer, clientId: name + '_s', host: this.config.aws.iotEndpoint, }; // awsIot.thingShadow modifies the params object in place! const jobShadowParams = _.cloneDeep(thingShadowParams); jobShadowParams.clientId = name + '_j'; const d = this.shadow = awsIot.thingShadow(thingShadowParams); const sensorTopic = this.config.aws.sensorValueTopic + name; const imageTopic = this.config.aws.imageTopic + name; d.on('connect', () => { d.register(name, {}, () => { d.subscribe([ sensorTopic, imageTopic ]); d.get(name); }); }); d.on('status', (thingName, stat, clientToken, stateObject) => {// do some work }); d.on('delta', (thingName, stateObject) => {// do some work }); d.on('error', (error) => {// report error }); const j = this.job = awsIot.jobs(jobShadowParams); j.subscribeToJobs(name, function(err, job) { if (err) { return console.log(err); } instance.runJob(job); }); j.startJobNotifications(name, function(err) { if (err) { return console.log(err); } }); j.on('error', (error) => {// report error });Edited by: AndyA on Jan 9, 2019 2:34 PMFollowComment"
Iot connectivity.connected always false
https://repost.aws/questions/QUYn2kmd-5RWyvFVcRzn8sdg/iot-connectivity-connected-always-false
false
"0Hello Andy,Just wanted to confirm - are your devices have corresponding things in the registry? [1] Fleet indexing only indexes information for things that are present in the registry.[1] https://docs.aws.amazon.com/iot/latest/developerguide/thing-registry.htmlCommentShareAWS-User-8603511answered 4 years ago0Yes, all of my devices have thing representations in the registry. They have shadow state that goes back and forth correctly, etc.Edited by: AndyA on Jan 10, 2019 7:56 AMCommentShareAndyAanswered 4 years ago0Please share your account #, region and time frame where you were doing tests, we'll take a look. You can send these via PM.CommentShareAWS-User-8603511answered 4 years ago0ok, finally looked through the logs, sorry for this taking so long! The problem is that clientId for devices that are being connected are plain seemingly random UUIDs, and Registry does not have things with such names.You need to check client creation code to see if clientId is initialized correctly.CommentShareAWS-User-8603511answered 4 years ago0Mismatch in client id with thing name causes connection events ignored.CommentShareAWS-User-8603511answered 4 years ago0Alex@AWS wrote:Mismatch in client id with thing name causes connection events ignored.Alex@AWS wrote:Mismatch in client id with thing name causes connection events ignored.So this is a great clue. Looks like if I set the clientId to the thingName it starts working. So then I have another question. Since the clientId has to be unique on the connections, how do I start a thingShadow and a jobs refrences with the same clientId. If I use the same client id, both instances seem to connect / disconnect / and reconnect endlessly. So when I do the following:const thingShadowParams = {privateKey: <key>clientCert: <cert>caCert: <cert2>clientId: name,host: this.config.aws.iotEndpoint,};const jobShadowParams= {privateKey: <key>clientCert: <cert>caCert: <cert2>clientId: name,host: this.config.aws.iotEndpoint,};this.shadow = awsIot.thingShadow(thingShadowParams);this.job = awsIot.jobs(jobShadowParams);I get continuous reconnection issues. If I change the jobShadowParams clientId to be something else (for example by appending "_j"), the system seems to be working right. But that seems the be a wrong usage of the clientId, which seems to need to be the same as the thingName (otherwise the jobs connection will not notice that it reconnected). How do I create both refrences to the same object, so I can manage shadow changes and respond to jobs?Also, the documentation for the sdk states the following about clientID:NOTE: client identifiers must be unique within your AWS account; if a client attempts to connect with a client identifier which is already in use, the existing connection will be terminated.It doesn't say anything about it matching the thingName. So maybe the documentation needs to be updated in the sdk?Edited by: AndyA on Mar 15, 2019 2:13 PMCommentShareAndyAanswered 4 years ago0Multiple AWS IoT services could work via a single connection. I'm more familiar with Python SDK, it allows to specify pre-connected MQTT client when constructing shadow / jobs client, other SDKs should have similar capability.Regarding matching clientId with thingName - it is not required to use most of AWS IoT capabilities (except few, like connectivity indexing for Fleet Indexing or applying policies on thing groups).CommentShareAWS-User-8603511answered 4 years ago"
"In WAF & Shield page, I can't delete a Web ACL which is associated with a Cognito User Pool.The Cognito User Pool is already removed when an Amplify app was removed. The Cognito User Pool was generated by Amplify automatically.In this situation, when you go to WAF & Shield page > Web ACLs > Associated AWS resource in AWS console, you can see the error message below.ErrorAn unspecified error occurred. Check your network connectivity.You cannot unassociate the resource, so you cannot delete the Web ACL.How should I do? Thanks in advance.FollowComment"
How to delete Web ACLs associated AWS resoureces BUT which has been already removed?
https://repost.aws/questions/QUJXzhvLbLTea7StQaKl1skA/how-to-delete-web-acls-associated-aws-resoureces-but-which-has-been-already-removed
true
"0Accepted AnswerTime solved.I got up the next day and checked it, vanished.CommentShareMichihiro Otaanswered 7 months ago"
"When you upload FAQ documents, can you use markup within the Answer column in order to control how answers are formatted when displayed as part of a Lex Response? For example, is there a way to include a hyperlink within an Answer so that it is active in a Response?from: https://docs.aws.amazon.com/lex/latest/dg/faq-bot-kendra-search.htmlI found a FAQ question for you: ((x-amz-lex:kendra-search-response-question_answer-question-1)) and the answer is ((x-amz-lex:kendra-search-response-question_answer-answer-1)).Answer: Just click on the link.If I'm using the example Response given here with the example answer given here, is there a way to enter 'link' in the Answer so that it will be active when displayed as a Response within Lex?FollowComment"
Adding markup to FAQ answers
https://repost.aws/questions/QUr4HXDXvURpWPyY93_hmZPw/adding-markup-to-faq-answers
false
"0Kendra supports multiple document types so you might want to pick the appropriate document type such as HTML to support formattinghttps://docs.aws.amazon.com/kendra/latest/dg/index-document-types.htmlCommentShareYoungCTO_quibskianswered a year ago0Hi there!Lex currently does not support Markup in the responses.If you would like Lex to surface a document link, you could use the request attribute :x-amz-lex:kendra-search-response-document-link-<N>Please see here for how to produce Kendra responses in Lex:https://docs.aws.amazon.com/lexv2/latest/dg/built-in-intent-kendra-search.html#kendra-search-responseCommentShareMathildaAWSanswered a year ago"
"Hi MLOps Gurus,I'd like to seek guidance on my below situation.I am currently working on a Sagemaker project where I'm using the MLOPS template for model building, training, and deployment. I trained the model using the sklearn framework and registered it in the model registry. However, while creating the model deployment pipeline, I faced an issue with the default cloudformation template resources. Specifically, when attempting to use both the ModelPackageName and custom image as parameters for the model creation, I encountered an error. I discovered that Sagemaker expects a "ModelDataUrl" parameter when using a custom image.Default Clouformation template:Resources: Model: Type: AWS::SageMaker::Model Properties: Containers: - ModelPackageName: !Ref ModelPackageName ExecutionRoleArn: !Ref ModelExecutionRoleArnHow I modified:Resources: Model: Type: AWS::SageMaker::Model Properties: Containers: - Image: !Ref ImageURI ModelDataUrl: !Ref ModelData Mode: SingleModel #This defaults to single model change to "MultiModel" for MME Environment: {"SAGEMAKER_PROGRAM": "inference.py", "SAGEMAKER_SUBMIT_DIRECTORY": !Ref ModelData} ExecutionRoleArn: !Ref ModelExecutionRoleArn My question is: How can I retrieve the trained model from codebuild pipeline and add "ModelDataUrl" parameter and dynamically pass it to the endpoint-config cloudformation template every time I execute the pipeline?Please guide me the steps to progress, thank you!FollowComment"
Using MLOPS template with custom inference code
https://repost.aws/questions/QUVOG_sJsZSk-cvJvypdXY5A/using-mlops-template-with-custom-inference-code
false
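No answer is recorded for this question. As a hedged illustration only: the registered model package already carries the trained model's S3 location, so one possible approach is to read ModelDataUrl from the package in the deploy step and pass it to the CloudFormation template as a parameter override. The ARN and the file handling mentioned in the comment are hypothetical.
import boto3

sm = boto3.client("sagemaker")
pkg = sm.describe_model_package(
    ModelPackageName="arn:aws:sagemaker:us-east-1:111111111111:model-package/my-group/1"  # hypothetical ARN
)
model_data_url = pkg["InferenceSpecification"]["Containers"][0]["ModelDataUrl"]

# The deploy CodeBuild step could then write this value into the stack's parameter
# overrides (for example the *-config.json consumed by the template) so the ModelData
# parameter is refreshed on every pipeline execution.
print(model_data_url)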
"hello,When i am login as a root user in aws console and try to open any object file from s3 bucket using object url in browser i am getting following error<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>JTZXNVDSYH9ZKM73</RequestId><HostId>pyGf1VYpvjATV3wBHn3fVCqeJnsaerNqThMQS1fdcA7hzC1gI9UdI7Z2OlJZ0aa1Q9oyl+3BRiE=</HostId></Error>So help me in this query, how can i allow any single user to access file using object url?FollowComment"
how can i make s3 bucket make public for single iam user or root user or owner of bucket
https://repost.aws/questions/QUxTt_3fX2RhmYsWgiRFGdug/how-can-i-make-s3-bucket-make-public-for-single-iam-user-or-root-user-or-owner-of-bucket
false
"0You can access an object wit Object URL only if the object is public (more information here). The Object URL does not carry any authorization context. Any authorized user can still download the object from the S3 console clicking on the "Download" button on the top right, or using the AWS CLI with the aws s3 cp command (see the documentation).CommentShareRoberto_Manswered 9 months ago"
Hi all, I am trying to replicate a DynamoDB table where data is already present. The replication is set up through the CDK stack, and when the replication takes place I am facing this issue:
“message”: “Failed to create the new replica of table with name: ‘ContributionModelPromotedAssets’ because one or more replicas already existed as tables.”
errorType: ValidationException, message: “Create/Update/Delete of replica is not allowed while the replica is being added to table with name: ‘ContributionModelPromotedAssets’ in region: ‘us-east-1’.”
Can someone help me overcome this issue and create the replica? The replica is created in the lower environments (dev), but it is not getting created in the gamma accounts.
Replica issue in dynamodb
https://repost.aws/questions/QUyDOtCUOpSuWmZNwtGFFgmA/replica-issue-in-dynamodb
false
"0Maybe this is the issue? From: Replicas.You can create a new global table with as many replicas as needed. You can add or remove replicas after table creation, but you can only add or remove a single replica in each update.CommentShareEXPERTkentradanswered 3 months ago"
"We're using a 17000GB io2 Block Express volume attached to an r5b.2xlarge instance in us-east-1 and us-east-2. We can create snapshots of the volume, no problem. However, we cannot create encrypted volumes from these snapshots due to the error "Cannot use encryption key ID when creating volume from encrypted snapshot." Using an encrypted snapshot smaller than 17000, I can create encrypted volumes in sizes up to 16384GB. As soon as I try 16385GB, I get that error. This is true in us-east-1 and us-east-2. Also, our io2 Block Express quota has been increased from 20TiB to 40TiB, so I don't think it's that. It's a customer managed KMS key that we've been using for years.Anyone have any clues?FollowCommentRodney Lester a year agoDoes your IAM credentials have permissions to use the KMS key?Share"
creating io2 Block Express encrypted volumes from encrypted snapshots is broken
https://repost.aws/questions/QUYMqsTgyUS6W2LetKWb7veA/creating-io2-block-express-encrypted-volumes-from-encrypted-snapshots-is-broken
true
"0Accepted AnswerI got an update from AWS Support. There is a bug where if you use an encrypted snapshot to create an encrypted volume over 16384 GB, and specify the KMS key, you get that error. The UI specifies the KMS key even though it doesn't need to; a volume will be automatically encrypted with the same key as the one used by the snapshot. The workaround is to use the CLI to accomplish the same task until they fix the bug.Thanks for the suggestions all.CommentShareJamie Grueneranswered a year ago0To clarify, you can launch a R5b instance with encrypted io2 volume greater than 16 TiB. However, the snapshot must be encrypted and the same key must be used while restoring the volume. If the snapshot is unencrypted, then you can make an encrypted copy to create volume/launch instanceCommentShareAWS-User-3235162answered a year agoJamie Gruener a year agoRight, I'm with you. Everything is encrypted with the same key. We're simply trying to replace an existing encrypted volume with a new one.Share0This is a restriction on io2bxYou can’t launch an R5b instance with an encrypted io2 Block Express volume that has a size greater than 16 TiBYou can refer here for more detailshttps://aws.amazon.com/blogs/aws/amazon-ebs-io2-block-express-volumes-with-amazon-ec2-r5b-instances-are-now-generally-available/CommentShareDevinder-theDBGuyanswered a year ago0Interesting. The full relevant quote:You can’t launch an R5b instance with an encrypted io2 Block Express volume that has a size greater than 16 TiB or IOPS greater than 64,000 from an unencrypted AMI or a shared encrypted AMI. In this case, you must first create an encrypted AMI in your account and then use that AMI to launch the instance.In our case the AMI and the snapshot are both encrypted and not shared. Note that I'm not trying to launch an instance here--I already have it.Here's the use case. DB server with an encrypted io2 Block Express data volume at 17000 GB with 3000 IPS. Key is ours and not shared.Stop instanceMake snapshot of data volumeMake new volume from that snapshot but 20000 GB instead of 17000 GBSwap data volumes on the instanceStart instanceEnjoy the bigger drive!It's at the third step that things fail. But from what I can tell I'm not doing anything prohibited.CommentShareJamie Grueneranswered a year ago0I can't respond to @Rodney Lester (how does this thing work?), but yes, I have full permissions on the key.CommentShareJamie Grueneranswered a year ago"
"I use Cognito Sync "updateRecords" function in JavaScript SDK v3 (https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CognitoSync.html#updateRecords-property) and set "Op" in "RecordPatches" to "remove", but specified record(s) not deleted and "SyncCount" increased by 1.I also get the same result with same operation in AWS CLI.What is the right way to delete a record key (not just record value) in dataset ?FollowComment"
How to delete a record key in Cognito identity pool dataset?
https://repost.aws/questions/QUnnTeWBgoQZybgG8xfxmfAg/how-to-delete-a-record-key-in-cognito-identity-pool-dataset
false
Is there a way to create a list of resources based on their creation date? This is for understanding which resources were created after a MAP Agreement was signed, and for collecting them in one place.
How to list services by resource creation date?
https://repost.aws/questions/QUEAdg5Ze5S72JH1EPntXDSw/how-to-list-services-by-resource-creation-date
false
"0HelloYou can use AWS config Advanced query to get the information of all resources single account and multiple accounts in a single place using Config aggregatorGo to Advanced Query config console in AWS link hereCopy the Query into the Console and run the QueryPlease find the below query to get the information of resources by the date > 2021-07-26, You can change this date by your MAP date and you will get results , You can also export in CSVSELECT resourceId, resourceName, resourceType, configuration.instanceType, tags, configuration.creationDateWHERE configuration.creationDate > '2021-07-26'ORDER BY configuration.creationDateIf you want to filter you can change by the data types hereThank YouGKCommentShareGKanswered a year agotatahari_aws 7 months agoAfter following the steps mentioned by GK, i found that stopping and starting EC2 instances updates the creationDate as launchDate, sue to which this query gives incorrect results. Has anyone else faced the same issue?Share"
"We use AWS WorkDocs to distribute large files to our users , these files tend to be above 5GB and we share the files through the public link option that AWS offers, a number of users have complained of extremely slow download speeds how can this be improved ? Can anything be done from Amazons end ?FollowComment"
Is there a way to improve AWS Workdocs download speeds ?
https://repost.aws/questions/QUegAz9-qgSXyfZreKlKaG9g/is-there-a-way-to-improve-aws-workdocs-download-speeds
false
"1Hi there!From the notes, I understand that you use AWS WorkDocs to distribute large files to your users, and would like to know if there is a way in which the download speed can be improved. Please note that the AWS Workdocs download speed can be affected by multiple factors i.e. the region your WorkSpace resides in, network connectivity.For the best WorkSpace experience, you should be within 2,000 miles of the AWS Region that your WorkSpace is in. You can refer to the Regional Products and Services page for details of Amazon WorkSpaces service availability by region [1]. You also can check performance by using the Amazon WorkSpaces’ connection health check website [2].Depending on your use case, S3 might be an alternative since you can use Amazon S3 Transfer Acceleration. Amazon S3 Transfer Acceleration is a bucket-level feature that enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket [3&4].I hope you find the above information helpful.Have a great day ahead!References:[1] https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/[2] https://clients.amazonworkspaces.com/Health.html[3] https://docs.aws.amazon.com/AmazonS3/latest/userguide/transfer-acceleration.html[4] https://aws.amazon.com/s3/transfer-acceleration/CommentShareLettyanswered a year ago"
"We have API Gateway deployed in account A and want to send Access Logs to a Firehose in account B so all auditing services and billing are separated. But after Firehose ARN from account B was set in API Gateway, we are getting the error "Invalid ARN specified in the request. ARN must belong to account A and region should be X".Is it possible that we are missing some permission configuration here? Or is just that API Gateway does not have the option to send Access Logs to another account?FollowComment"
Can API Gateway send Access Logs to Firehose in a different account?
https://repost.aws/questions/QUJp6lzfihT42unQpR5GsBxQ/can-api-gateway-send-access-logs-to-firehose-in-a-different-account
false
"0The recommendation would be to have API Gateway in account A, Kinesis Firehose in account A, and S3 target bucket + analytics in account B, you could find the example on how to achieve this here. The account A would also be charged by the usage of Kinesis Firehose. You could use tag-based cost allocation to know that cost in particular if you want to internally allocate that cost (although from my experience Firehose's cost shouldn't be too much to go through this hassle)CommentSharePablo Guzmananswered 6 months ago0Thanks for the recommendation, we'll do that. Can you confirm it's not possible to send API Gateway's Access Logs to Firehose in a different account?CommentSharerePost-User-8932753answered 6 months ago"
"Has anyone had the need to custom build and containerize of our cognitive services so it can run on premise? ( Speech, vision, image etc)FollowCommentMia C 9 months agosuggest: our AI servicesShare"
"Custom build and containerize cognitive services to run on prem ( Speech, vision, image etc)"
https://repost.aws/questions/QUmYCIf6J2Q9eGRSRNX6wlBQ/custom-build-and-containerize-cognitive-services-to-run-on-prem-speech-vision-image-etc
true
"0Accepted AnswerHi there,Thanks for reaching out. Since this question is for multiple different services, I can try providing an answer for Amazon Rekognition. We are not currently supporting custom build and containerized of cognitive services to run on premise. So far we have not yet received customer request on extending support for that.CommentShareAWS-hcsanswered 9 months ago"
Hi, I was trying to access the IoT Core broker with MQTT Explorer, but I can't connect to it somehow. I've been using mqtt://****-ats.iot.eu-west-3.amazonaws.com:8883 and inserted the client certificates into MQTT Explorer, but that doesn't seem to work. Does someone have any experience with this? I have no clue what I'm doing wrong at the moment.
IoT core connection with MQTT explorer
https://repost.aws/questions/QUz_jeKNYJRiKbao-gQ82QCg/iot-core-connection-with-mqtt-explorer
false
"1MQTT Explorer subscribes by default to $SYS/# topic. This topic is not supported by AWS IoT Core and will cause the broker to disconnect the client.To fix, open the Advanced settings and remove the $SYS/# topic from the subscription list by clicking onto it.Ensure also the the clientId you are using to connect is allowed by the AWS IoT Policy attached to the certificateThis is the setup in the main window:CommentShareEXPERTMassimilianoAWSanswered 3 months agorePost-User-1452554 3 months agoAlready did that. But i can't even connect to the broker? Is the port 8883 correct ? And what about TLS ?ShareMassimilianoAWS EXPERT3 months agoTLS must be enabledSharerePost-User-1452554 3 months agoThanks it works now!Share0Hi! can you provide a little bit more detail of you setup ? With the information you give I can think of two options :Have you chained the Amazon Root CA with your clients certificate ? You can have a lookhttps://docs.aws.amazon.com/iot/latest/developerguide/server-authentication.html#server-authentication-certsIs the certificate active ? Is it associated with the correct policy with the needed permissions ?CommentSharerePost-User-9286330answered 3 months agorePost-User-1452554 3 months agoAt the moment our remote installation is sending data through mqtt to IoT Core. I can see that the data is correctly comming in by using the test explorer. Now i want to access the incomming data on the IoT Core with the MQTT explore application to see if everything is working fine and for further development. I'm using the certificates from the excisting connectionShare"
"Trying to set up a sub domain of a domain this is 1) created with a Reusable Delegation set, that also host the white label DNS records.The error I get is:Error: error creating Route53 Hosted Zone: ConflictingDomainExists: Cannot create hosted zone with DelegationSetId DelegationSetId:N07332301DF5CDCEBAHPP as the DNSName <my sub domain>. conflicts with existing ones sharing the delegation set status code: 400,AWS documentation on these concepts is very this on the ground.To quote:Create a public hosted zone: Two hosted zones that have the same name or that have a parent/child relationship (example.com and test.example.com) can't have any common name servers. You tried to create a hosted zone that has the same name as an existing hosted zone or that's the parent or child of an existing hosted zone, and you specified a delegation set that shares one or more name servers with the existing hosted zone. For more information, see CreateReusableDelegationSet.There is not much on this in google land but did find this old forums post to explain the likely issue but don't know best approach to resolve.It would be great is AWS would update documentation to explain how to do this ?FollowComment"
Route 53 - ConflictingDomainExists - subdomains with (White-Labeled) Reusable Delegation set
https://repost.aws/questions/QUd0v-k-9CQpqs_7s-WRWhZA/route-53-conflictingdomainexists-subdomains-with-white-labeled-reusable-delegation-set
true
0Accepted AnswerI had the same problem in the past. You can try to manage separate delegation sets for each top-level hosted zone and second-level hosted zone: company.com -> delegation-set-primary; company.io -> delegation-set-primary; dev.company.com -> delegation-set-secondary; dev.company.io -> delegation-set-secondary; prod.company.com -> delegation-set-secondaryCommentShareEXPERTposquit0answered a year ago
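A rough sketch of that idea with boto3, using hypothetical zone names (company.com / dev.company.com); the point is only that the subdomain zone references a different reusable delegation set than its parent, so the two zones never share name servers:

import uuid
import boto3

route53 = boto3.client("route53")

# Create (once) a secondary reusable delegation set for all second-level zones
secondary = route53.create_reusable_delegation_set(CallerReference=str(uuid.uuid4()))
secondary_id = secondary["DelegationSet"]["Id"].split("/")[-1]   # e.g. "N2ABC..."

# The subdomain zone must NOT share name servers with the parent zone,
# so it is created with the secondary delegation set instead of the primary one.
zone = route53.create_hosted_zone(
    Name="dev.company.com",                 # hypothetical subdomain
    CallerReference=str(uuid.uuid4()),
    DelegationSetId=secondary_id,
)
print(zone["DelegationSet"]["NameServers"])  # add these as NS records in company.com to delegate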
"I would like to assign the admin permission for specific resources (EC2, RDS, volumes, snapshot), how to group these resources from administration's perspective? This resource group will also need use some components in my VPC (subnet, routing tables, security groups, ...), is it possible to isolate them from my other services?FollowComment"
Is it possible to set up an IAM account for 3rd party to look after the specific resources?
https://repost.aws/questions/QU1rnoy6k2Qdqs96d1seQClA/is-it-possible-to-set-up-an-iam-account-for-3rd-party-to-look-after-the-specific-resources
true
"0Accepted AnswerThis is a good resource you can referWhen third parties require access to your organization's AWS resources, you can use roles to delegate access to them. For example, a third party might provide a service for managing your AWS resources. With IAM roles, you can grant these third parties access eiito your AWS resources without sharing your AWS security credentials. Instead, the third party can access your AWS resources by assuming a role that you create in your AWS account. To learn whether principals in accounts outside of your zone of trust (trusted organization or account) have access to assume your roles, see What is IAM Access Analyzer?.Providing access to AWS accounts owned by third parties - https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.htmlAlso check on **Establishing your best practice AWS environment **- https://aws.amazon.com/organizations/getting-started/best-practices/CommentShareEXPERTAWS-User-Nitinanswered 9 months ago"
"Hi, we find there is a very serious problem about the Workmail Permission function.We use AWS Console to set Permission in user A to allow user B to have Full Access and Send As permissions.User B use Workmail web application to login his account, find that he can add all other users' folders in the Open other inbox, so he can read all users' emails in the organization.FollowComment"
Permission problem
https://repost.aws/questions/QUkEHKbFycRhyMbdkP14ZL2A/permission-problem
false
"0Hi,I'm sorry to hear you're experiencing problems with your users. I took a look at your account and it seems for 1 organization you have configured a migration admin. Could that be the user that can open all Mailboxes?Kind regards,RobinCommentShareMODERATORrobinkawsanswered 2 years ago0Hi,Sorry for making a mistake. I just found out that is exactly what you said here. The user B is the migration admin.Edited by: csl on Jul 5, 2021 1:49 AMCommentSharecslanswered 2 years ago"
What is the relevant API to collect costs associated with AWS services in the context of carbon footprints?How do I identify the endpoint? For example: I have services running in my sandbox and would like to know how much they are costing me; which APIs allow me to retrieve this information?Is it possible to list the APIs that allow me to collect financial and ecological data from AWS?For each API, is it possible to describe the information collected?Thanks for your help.FollowComment
What is the relevant API to collect costs associated with AWS services in the context of carbon footprints?
https://repost.aws/questions/QUkKgIs1XlRx2OeQAs9a9Wgg/what-is-the-relevant-api-to-collect-costs-associated-with-aws-services-in-the-context-of-carbon-footprints
false
"0AWS offers two APIs that you can use to query prices:With the AWS Price List Bulk API, you can query the prices of AWS services in bulk. The API returns either a JSON or a CSV file. The bulk API retains all historical versions of the price list.With the AWS Price List Query API, you can query specific information about AWS services, products, and pricing using an AWS SDK or the AWS CLI. This API can retrieve information about certain products or prices, rather than retrieving prices in bulk. This allows you to get pricing information in environments that might not be able to process a bulk price list, such as in mobile or web browser-based applications. For example, you can use the query API to fetch pricing information for Amazon EC2 instances with 64 vCPUs, 256 GiB of memory, and pre-installed SQL Server Enterprise in the Asia Pacific (Mumbai) Region. The query API serves the current prices and doesn’t retain historical prices.Below are the few links that would help you better understand the same. Hope this helps.Query API- https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/using-pelong.htmlBulk API- https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/using-ppslong.htmlBest practices for Cost Explorer API - https://docs.aws.amazon.com/cost-management/latest/userguide/ce-api-best-practices.htmlAPI Calls- https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/awsbilling-api.pdfCommentShareSUPPORT ENGINEERAWS-User-Chiraganswered 10 months agorePost-User-3327100 10 months agoHelloI may have expressed my request incorrectly, I apologize. The links you sent me, I have consulted them several times.Several times but I can't find the URL or API that allows me to collect my ecological and financial footprint data and services associated with the periodic user cost, then display it to me in json or csv format. (My services and cost associated with user aws periodically).ThanksShare"
"Hi Folks,I have a table called demo and it is cataloged in Glue. The table has three partition columns (col_year, col_month and col_day). I want to get the name of the partition columns programmatically using pyspark. The output should be below with the partition values (just the partition keys)col_year, col_month, col_dayCould you please help me in getting the desired output?Thank youRegards,APFollowComment"
How to retrieve partition columns from Glue Catalog table using pyspark
https://repost.aws/questions/QUEyauC42aSeWz_O-W6g5oLQ/how-to-retrieve-partition-columns-from-glue-catalog-table-using-pyspark
false
"0Hello,Generally, the last columns are the partitions.So, you can try get schema and then convert to list and read last n columns which are partitions.https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-crawler-pyspark-extensions-dynamic-frame.htmlhttps://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-crawler-pyspark-extensions-dynamic-frame.html#aws-glue-api-crawler-pyspark-extensions-dynamic-frame-schemaAs a solution other than in PySpark you can do get table API call and get the partitions and use them in code.https://awscli.amazonaws.com/v2/documentation/api/latest/reference/glue/get-table.html#outputCommentShareSUPPORT ENGINEERShahrukh_Kanswered 2 months ago"
The documentation here https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/registrar-tld-list.html states that we should add an aws:rePost comment if we need additional top-level domains.I want to register .at domains for Austria.FollowComment
When will we be able to register .at top level domains with Route53
https://repost.aws/questions/QUVN0tuBT2TWegEpccB3eBtw/when-will-we-be-able-to-register-at-top-level-domains-with-route53
false
"0Hello there!Currently for the NEW TLD request you need to create a Feature Request via Customer Support (CS)/ Technical support (TS). I would suggest you to open a case with the CS for the same. There is no ETA as AWS does not publicize roadmap items, but you can check in periodically to see if this domain extension is available by reviewing these AWS documentation pages:Domains that you can register with Amazon Route 53 -- https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/registrar-tld-list.html -- [1]What's New with AWS? -- https://aws.amazon.com/new -- [2]AWS blogs -- https://aws.amazon.com/blogs/aws/ [3]That said, one way to use R53 (for better integration with AWS services, for example) without actually transfering the domain, is to update the name servers in your current registrar with R53 name servers. You can refer to the process here. [4]https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/migrate-dns-domain-in-use.html -- [4]CommentShareGGanswered 7 months ago"
"Hi,I'm trying to create a new Inspector Assessment Template in the console UI. I'm running into the SNS topic owned by the same account that I want to publish findings to not showing up in the list of SNS topics presented in the console UI list even when I have other Assessment Templates in the same account already publishing findings events to the same SNS topic. The existing Assessment Templates publishing to this SNS topic were created via a CloudFormation Template and not via the UI. Clearly there's either something wrong with console itself or the permissions in SNS for Inspector console to be able to generate a complete list of SNS topics.What do I need to do get all SNS topics displayed in the list in the console UI?Regards,-KurtFollowComment"
SNS topic not showing up in Inspector console.
https://repost.aws/questions/QUxNAEjYvFTUu1Ml9plHTI1g/sns-topic-not-showing-up-in-inspector-console
false
"0Hey all,Working with some excellent Customer Support Engineers (CSEs) on the Inspector and SNS support teams we were able to narrow this symptom down to a bug in the Inspector console client side code where only the first page of SNS topics (first 100 topics) is ever fetched by the Inspector client side code. So any SNS topic that is not in that first page is never presented in the displayed list nor can you enter it manually because input is validated against the incorrectly constrained client side list.The Inspector team have a bug in their backlog to address this defect.<Insert the usual "can't say when" boilerplate here.>Regards,-KurtCommentShareklarsonanswered 2 years ago"
"Gentlemen,Usually the driver created with the workdocsclient has the name of the machine.Is there any way to put the username that was used in the workdocsclient.FollowComment"
User name on virtual disk
https://repost.aws/questions/QUe5S61MbYQ5a-Ieqcr_Ix7A/user-name-on-virtual-disk
false
"0Yes, it is possible to include the username that was used to create the WorkDocs client in the driver name. To do this, you will need to pass the username as a parameter to the WorkDocs client constructor when you create the client.For example, you can use the following code snippet to create a WorkDocs client and include the username in the driver name:from workdocs_client import WorkDocsClientclient = WorkDocsClient(username='YourUsername', driver_name='WorkDocsClient-YourUsername')You can then use the client variable to interact with the WorkDocs service, and the driver name will include the username that was used to create the client.Keep in mind that you can use any string variable instead of 'YourUsername' in the username parameter.CommentSharejayamaheizenberganswered 4 months agoRoney 2 months agoCould you help me how to proceed?How could put user name in virtual driver.Share0Would this code be in the machine's CMD?CommentShareRoneyanswered 2 months ago0How can I use this code to change the driver name?from workdocs_client import WorkDocsClientclient = WorkDocsClient(username='YourUsername', driver_name='WorkDocsClient-YourUsername')I really don't know where to use it.CommentShareRoneyanswered 2 months ago"
"Hello, my Credit Card has been replaced with a new one (same number but different expiry date and CVV)I was warned that the monthly payment failed, and this is for sure the reason. Now I have updated all the data.Question is: will you retry the payment today or tomorrow?Or do I have to proceed with a manual payment and you will not make any further attemp till endo of April?Please give instructions.AlbertFollowComment"
Monthly payment
https://repost.aws/questions/QUN62xIVHhQ6a2lSEG0Tar0w/monthly-payment
true
1Accepted AnswerThis happened to me recently. Go to Payments under the AWS Billing Dashboard. You should see any outstanding invoice there and you can Complete Payment.CommentShareEXPERTkentradanswered 2 months ago
Need help with using the GetTranscript API endpoint to get the transcript for a contact ID.It is unclear which URL prefix must be used along with the endpoint mentioned in the documentation, and how to get the connection token for the Participant Service from the amazon-connect-streams JavaScript library.Thanks in advance.FollowComment
Amazon Connect Participant Service: GetTranscript HTTP API Guidance
https://repost.aws/questions/QUsoWaZ5hHTAmCWob0kxYZKw/amazon-connect-participant-service-gettranscript-http-api-guidance
false
"0For the Amazon Connect Participant Service endpoints, follow this linkhttps://docs.aws.amazon.com/general/latest/gr/connect_region.htmlan example for us-east-1 is participant.connect.us-east-1.amazonaws.com (HTTPS)to get the Connection Token to use with the GetTranscript, you need to save the Connection Token when you call CreateParticipantConnection. To call CreateParticipantConnection initially, you would have gotten a Participant Token from calling StartChatContact.The sequence is explained in the bottom half of this bloghttps://aws.amazon.com/blogs/contact-center/reaching-more-customers-with-web-and-mobile-chat-on-amazon-connect/Perficient also did a nice job explaining this with GetTranscript as wellhttps://blogs.perficient.com/2019/12/02/amazon-connect-chat-creating-your-own-customer-chat-experience/CommentShareClarenceChoianswered 2 months ago"
"I have an outbound contact flow in which we want to check the call progress. There is an block in contact flow "Check Call Progress", the issue is that this block is only available for the outbound campaigns not for simple outbound calls. We want to have same option or similar workaround for simple outbound calls. Kindly guide me if we can implement same block in simple outbound calls..FollowComment"
How to check call progress in simple outbound call flow in Amazon Connect?
https://repost.aws/questions/QUuozt1s9MR_OH87vrMF_SSg/how-to-check-call-progress-in-simple-outbound-call-flow-in-amazon-connect
false
"0Hello,This is Tom from AWS Support.For the Contact Flow Block of "Check Call Progress" [1], it is true that the block can only be set on Campaigns. At this moment, unfortunately, there is no similar block for simple outbound calls.An alternative workaround would be to use Flow Block "Set recording and analytics behavior" [2] so that Contact Lens can reveal transcript after a call completed [3][4]. From there, either a custom-build automated solution or human verification (i.e. managers) can help and validate if the call is answered and whether answered by answering machines.I hope the above information helps.List and References[1] https://docs.aws.amazon.com/connect/latest/adminguide/check-call-progress.html[2] https://docs.aws.amazon.com/connect/latest/adminguide/set-recording-behavior.html[3] https://docs.aws.amazon.com/connect/latest/adminguide/view-call-transcript-ccp.html[4] https://docs.aws.amazon.com/contact-lens/latest/APIReference/API_ListRealtimeContactAnalysisSegments.htmlCommentShareSUPPORT ENGINEERTom_Tanswered 5 months ago"
I encountered this error when trying to update a record in an Iceberg table through Athena:"GENERIC_INTERNAL_ERROR: Fail to commit without transaction conflict"The Iceberg V2 table was created by a Spark job using Glue as the data catalog for Iceberg. Any clue what might be the cause? ThanksFollowCommentazorman 8 months agoSame problem here!Share
Athena: encounter errors when update Iceberg table
https://repost.aws/questions/QUoFbkHLKCQ5WwAVLeEfizGA/athena-encounter-errors-when-update-iceberg-table
false
"Currently AWS Cognito stores users in one region. This causes all of client network requests go to this specific region, which can cause significant latency. (Imagine a scenario when Cognito region is US-east and client is in Australia)Global solve issues like that (Backend service operate in region A, but clients can connect to GlobalAccelelator locations which reduce latency)It there any simple way of integrating Cognito and Global Accelerator in such a way that GlobalAccelelator will just 'proxy' all traffic to Cognito region?FollowComment"
Integration of Global Accelerator with Cognito
https://repost.aws/questions/QU6NI3EjtdTH2g0afaavgXcw/integration-of-global-accelerator-with-cognito
false
"0No, there's no simple way of doing this.Note that you're dealing with (almost) the worst-case latency. It'd be better (if possible) to use Cognito in us-west-2 - still low latency for US customers; better for Australia. That said, putting it in Australia would be absolutely the best. ;-)Please get in touch with your local AWS Solutions Architect - they can take this feedback and add it to a feature request.CommentShareEXPERTBrettski-AWSanswered a year ago"
I need to get the policy details through my Java code. Currently I am using GetPolicyResult and passing my policy ARN:GetPolicyRequest request = new GetPolicyRequest().withPolicyArn(my_arn);GetPolicyResult response = iam.getPolicy(request);I am following the doc below for this:https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-iam-policies.html#getting-a-policyBut I am unable to get the policy details. I am getting "com.amazonaws.services.identitymanagement.model.InvalidInputException - ARN is not valid". I have given the ARN of my policy exactly as shown in the AWS console for that policy, but it still says the ARN is invalid.FollowComment
Unable to fetch IAM policy details in GetPolicyResult
https://repost.aws/questions/QU9W4tAHOzQGaWXDo11ahbMw/unable-to-fetch-iam-policy-detailes-in-getpolicyresult
false
"0Hello,Unfortunately, I am unable to fully answer this question without additional context. The InvalidInputException error is defined as "The request was rejected because an invalid or out-of-range value was supplied for an input parameter." [1].I believe you already double checked the ARN format, I also share the format in [2] for your reference.If this doesn't solve your issue I would recommend you to open a case with AWS Premium Support from the account where the error is occurring, so we can investigate the issue for you. Since this forum is public, it is not recommended to share account specific information.References:[1] - https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/iam/IamClient.html#getPolicy(software.amazon.awssdk.services.iam.model.GetPolicyRequest)[2] - https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.htmlCommentSharerePost-User-5962359answered 2 months ago"
"A customer is asking what and when an API can be released to delete ALL records within a DynamoDB table? Also, if they manage to create and drop the entire table intentionally from app, how they can take care of the related "Alert".FollowComment"
Delete all records in DynamoDB
https://repost.aws/questions/QU-0WDhD7XRtCUEB_b6s-7zw/delete-all-records-in-dynamodb
true
"0Accepted AnswerDynamoDB tables are truly only provisioned where a user needs to worry about the keys/Indices involved. My recommendation would be to delete the table, and re-provision it using a CloudFormation template for consistency.I have a hunch that this customer is very used to classic relational databases, and doesn't have as much experience with NoSQL implementations like DynamoDB.I would consider tuning IAM permissions for the application to not allow deletion of the table itself to protect against accidental total deletion.Additionally, I believe it is possible to create a CloudWatch alert that would fire on any deleted tables in a production environment.CommentShareAWS-User-7579179answered 7 years ago"