Dataset columns:
Description - string (6 to 76.5k characters)
Question - string (1 to 202 characters)
Link - string (53 to 449 characters)
Accepted - bool (2 classes)
Answer - string (0 to 162k characters)
"Totally confused. Rest API documentation indicates that a date range can be used in the params, but it doesn't work.var params = {accountId: 'STRING_VALUE', /** required /vaultName: 'STRING_VALUE', / required **/jobParameters: {ArchiveId: 'STRING_VALUE',Description: 'STRING_VALUE',Format: 'STRING_VALUE',InventoryRetrievalParameters: {EndDate: 'STRING_VALUE',Limit: 'STRING_VALUE',Marker: 'STRING_VALUE',StartDate: 'STRING_VALUE'} ...};glacier.initiateJob(params, function(err, data) {if (err) console.log(err, err.stack); // an error occurredelse console.log(data); // successful response});https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Glacier.htmlIs an inventory-retrieval by date range available?I don't want to return everything, only the inventory of a specific date range.Edited by: gregbolog on Apr 14, 2020 10:31 AMFollowComment"
inventory-retrieval by date range
https://repost.aws/questions/QUKJ3PtDDsQFm0Pxqc_c-Azg/inventory-retrieval-by-date-range
false
0 It turns out the service doesn't like fractional seconds in the StartDate/EndDate values.CommentShareGregory-7596671answered 3 years ago
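As an illustration of that fix, here is a minimal boto3 sketch of an inventory-retrieval job limited to a date range; the vault name and dates are placeholders, and the timestamps deliberately carry no fractional seconds:

import boto3

glacier = boto3.client("glacier", region_name="us-east-1")

# "-" means the account that owns the credentials; the vault name is a placeholder.
response = glacier.initiate_job(
    accountId="-",
    vaultName="my-example-vault",
    jobParameters={
        "Type": "inventory-retrieval",
        "Format": "JSON",
        "InventoryRetrievalParameters": {
            # ISO 8601 timestamps without fractional seconds
            "StartDate": "2020-01-01T00:00:00Z",
            "EndDate": "2020-03-31T23:59:59Z",
            "Limit": "100",
        },
    },
)
print(response["jobId"])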
"I am serving images from S3 and want to migrate to CloudFront.The S3 bucket is ACL-enabled. Some files are made public (ACL: public-read) and some are private, so they can be accessed like (where public files don't require signature):public -> https://xxx.s3.ap-northeast-1.amazonaws.com/public.jpgprivate -> https://xxx.s3.ap-northeast-1.amazonaws.com/private.jpg?AWSAccessKeyId=…&Signature=…&Expires=…But when I set up CloudFront for this S3 bucket:If I don't restrict viewer access (in Behavior setting), both public and private files can be accessed without signature.If I restrict viewer access using the key pair, then both types require signature in the URLs.Is it possible to set up this as S3 does, which means, requires signature based on the ACL of the objects in S3?FollowComment"
How to require CloudFront URL signing based on S3 object permission
https://repost.aws/questions/QUXcB0z_m_Q_66GDl5XBaWGg/how-to-require-cloudfront-url-signing-based-on-s3-object-permission
false
"0Yes, it is possible to configure CloudFront to require signatures based on the ACL of the objects in S3.To achieve this, you can use CloudFront's Origin Access Identity (OAI) feature. This feature allows you to create a special CloudFront user that can access your S3 bucket, while denying access to all other users.setup instruction:Create a new CloudFront distribution and set your S3 bucket as the origin.In the "Origin Access Identity" section of the distribution settings, create a new identity and grant it read access to your S3 bucket.In the S3 bucket permissions, update the bucket policy to grant read access to the CloudFront OAI.Configure your CloudFront distribution to require signed URLs or cookies, depending on your requirements.With this setup, CloudFront will only allow access to objects in your S3 bucket if the request is made through the CloudFront distribution and includes the required signature. Public objects in your S3 bucket will still be accessible without a signature, while private objects will only be accessible through the CloudFront distribution with the required signature.CommentSharemishdaneanswered 2 months ago0Thank you for answering!I have a question about this: "Configure your CloudFront distribution to require signed URLs or cookies"At this point, all URLs with the CloudFront URL will require signature, is that right?What I would like:public -> https://123.cloudfront.net/public.jpgprivate -> https://123.cloudfront.net/private.jpg?[Signature_of_CloudFront]But requiring signed URLs would affect both public/private URLs. I cannot just replace the hostname of S3 with CloudFront.Is there a solution? Thanks!CommentSharerePost-User-6077707answered 2 months ago"
"I am currently experiencing an issue with SSH connections to my EC2 instance. Whenever I attempt to connect using SSH, I receive the following error message: "kex_exchange_identification: read: Connection reset by peer."I have verified that my SSH key pair is correctly configured and the security groups allow inbound SSH traffic on port 22. However, I am still unable to establish a successful SSH connection.I would greatly appreciate any guidance or suggestions to regain SSH access to my EC2 instances. Thanks in advance!FollowComment"
Error "Connection reset by peer" with ssh
https://repost.aws/questions/QUo74N6_CBTmqO88pMRslokQ/error-connection-reset-by-peer-with-ssh
false
"0Is it being rejected by the TCPWrapper?You may need to review the settings in "/etc/hosts.allow" and "/etc/hosts.deny" of your EC2 instance.In particular, you should check the file "/etc/hosts.deny".If your IP address is listed in this file, your connection will be denied.If SSM Agent is installed on EC2, try using Session Manager instead of SSH to connect.https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started.htmlCommentShareEXPERTRiku_Kobayashianswered 14 days ago"
"Is possible to remove any kind of autostart feature on CodePipeline? I have 2 action in source stage, one from Codecommit and one from S3 and both generate automatically 2 different CloudWatch rules that trigger my pipeline. I also need to remove the autostart at resource creation, actually i'm using terraform to build the pipeline but in the documentation i didn't find anything related. Thanks for help!FollowComment"
Remove any CodePipeline Trigger that autostart the execution
https://repost.aws/questions/QUdX7jenJMRxGB1ZcSaP99PQ/remove-any-codepipeline-trigger-that-autostart-the-execution
true
"1Accepted AnswerFor CodeCommit and Amazon S3 Source actions, you are not able to disable change detection through the CodePipeline APIs, SDK, or Terraform. You could manually modify/remove the CloudWatch Events Rule that is created in the EventBridge Console, please only take this course of action if you understand the implications and take a backup.For GitHub (Version 2) Source actions, you are able to disable change detection by editing the Source configurationProperties and setting DetectChanges to false.As an alternative, you could add a Manual Approval action to a new Stage following your Source stage. See: AWS documentation on adding a manual approval step.For Terraform, here is some sample HCL code:stage { name = "Approve" action { name = "Approval" category = "Approval" owner = "AWS" provider = "Manual" version = "1" configuration { } }}CommentShareawstyleranswered 3 months ago0Can you explain what your intended outcome is? CodePipeline is meant to orchestrate events between source repositories/location and CodeBuild. If you do not want that trigger in place, what exactly do you want as the intended outcome?CommentShareawstyleranswered 3 months ago0I want a pipeline to be started manually in any case, i need it for development purposes. There is a way to remove this automatic creation of cloudwatch rules that trigger it?CommentSharerePost-User-1693629answered 3 months ago0Thanks, it's good as workaround.CommentSharerePost-User-1693629answered 3 months ago"
"I want to connect to AWS VPC using AWS Client VPN on Linux.But it seems that AWS Client VPN is only supported on AMD64 machines.For ARM CPU, how can I connect?FollowComment"
How to connect AWS VPC from ARM CPU?
https://repost.aws/questions/QUdZIWThp4RLO_B2RXDRzBQg/how-to-connect-aws-vpc-from-arm-cpu
false
"0Hi fumii,Unfortunately, at this time to use the AWS provided client for Linux, the following is required:Ubuntu 18.04 LTS or Ubuntu 20.04 LTS (AMD64 only)CommentShareAWS-User-0357409answered 4 months ago"
For our Sagemaker Notebook/studio deployments we have used lifecycle configs to turn off the app when inactive. Is this possible to apply to Canvas? Is this a supported function? is there an alternative?FollowComment
Sagemaker Canvas Lifecycle Config
https://repost.aws/questions/QU0LTBVP38RoSVvEmmmyc7xw/sagemaker-canvas-lifecycle-config
false
"0Hello, according to the service FAQ the approach to stop the session is by clicking on the Log Out button on the Canvas appCommentShareAWS-User-8651363answered a year agoEXPERTChris_Greviewed 9 months ago"
Does the AWS Backup service support deduplication and compression of data backups in a backup vault?FollowComment
AWS Backup service deduplication and compression.
https://repost.aws/questions/QUl-Y2_6A7Q9ORAx2dlUOfzg/aws-backup-service-deduplication-and-compression
false
"0No, AWS Backup service is used to centrally manage and automate backups across AWS services, does not support de-duplication and compression - AWS Backup FeaturesCommentShareAshish_Panswered 10 months ago"
I am trying to connect to an RDS db in a Control Tower created account. When I use PGAdmin or other tools to connect I get an 'internal server error' could not connect error. I opened the security group to 0.0.0.0 and no luck. When I use my management account I have no issues. Is there something about a control tower VPC that is different from the management account default vpc that is blocking an external connection?FollowComment
Control Tower RDS Connection error
https://repost.aws/questions/QUJivnV08tQKSwS6R97EE48A/control-tower-rds-connection-error
false
"0Where are you connecting to RDS from?For access from outside the VPC, public access must be configured in RDS.For access from a VPC other than the RDS, configure VPC peering and configure the route table.https://docs.aws.amazon.com/vpc/latest/peering/create-vpc-peering-connection.htmlCommentShareEXPERTRiku_Kobayashianswered a month agoKhandoor a month agoI have public access on the rds instance and am trying to connect from pgadmin on my local machine. Nothing fancy with the my set up. Tried several times with no luck. Would love the help to figure this out. I can connect with no issues from my main account, I compared the default vpc and security group in main account to the control tower account vpc and sec grp. I didn't see any difference can't really see what it could be thats causing the issue.ShareRiku_Kobayashi EXPERTa month agoWhat IP address do you see when you name resolve the RDS DNS name from your PC?For example, you can view it with the following command.nslookup RDS EndpointAlso, could you please share the full error text of your attempt to connect?Share0I do not believe control tower is playing any part.The the issue will be related to networking.Be that routing, NACLs and or security groups.Without knowing your network connectivity I would start looking there.CommentShareGary Mcleananswered a month ago"
"I am trying to create a CICD of my application that is available on Bitbucket. For this, I have created AWS CodePipeline that will deploy this app to ECS Cluster. I am trying to do this via AWS CLI. Here is my JSON file:{ "pipeline": { "roleArn": "arn:aws:iam::xxxxxxxxxxxx:role/service-role/AWSCodePipelineServiceRole-us-east-1-HubspotConnector", "stages": [{ "Name": "Source", "Actions": [{ "InputArtifacts": [], "ActionTypeId": { "Version": "1", "Owner": "AWS", "Category": "Source", "Provider": " " }, "OutputArtifacts": [{ "Name": "SourceArtifact" }], "RunOrder": 1, "Configuration": { "ConnectionArn": "arn:aws:codestar-connections:us-east-1:7xxxxxxxx3930:connection/5bxxxx2-257f-4xxxxx0-xxx3-edfdsfsdf7d672f", "FullRepositoryId": "rxxxxxh/hubspotcctorpipeline", "BranchName": "main", "OutputArtifactFormat": "CODE_ZIP" }, "Name": "ApplicationSource" }] }, { "name": "Build", "actions": [{ "inputArtifacts": [{ "name": "SourceArtifact" }], "name": "Build", "actionTypeId": { "category": "Build", "owner": "AWS", "version": "1", "provider": "CodeBuild" }, "outputArtifacts": [{ "name": "default" }], "Configuration": { "ProjectName": "cicdCli" }, "runOrder": 1 }] }, { "Name": "DeployECS", "ActionTypeId": { "Category": "Deploy", "Owner": "AWS", "Provider": "ECS", "Version": "1" }, "RunOrder": 2, "Configuration": { "ClusterName": "my-ecs-cluster", "ServiceName": "sample-app-service", "FileName": "imagedefinitions.json", "DeploymentTimeout": "15" }, "OutputArtifacts": [], "InputArtifacts": [{ "Name": "my-image" }] } ], "artifactStore": { "type": "S3", "location": "codepipeline-us-east-1-1xxx5xxxx29" }, "name": "newPipelineCicd", "version": 1 }}Here is the error I am facing:Can Someone describe me what I am doing wrong? I have searched for these errors but didn't get any help from anywhere. Also no one have written any tutorial or proper guide for this. I have found AWS documentation, one of the complexest documentations. Please guide me here.I would really appreciate that.FollowComment"
AWS CodePipeline throwing error "Missing required parameter in pipeline.stages[0]: "name""
https://repost.aws/questions/QU_HxFEaxvTFOyY0HSLmwM9Q/aws-codepipeline-throwing-error-missing-required-parameter-in-pipeline-stages-0-name
true
1Accepted AnswerThe parameter names are case sensitive for the AWS CLI input files.We recommend that you use the --generate-cli-skeleton option to generate the template with the "correct" parameter names to avoid errors.aws codepipeline create-pipeline --generate-cli-skeletonPlease refer to the CLI Skeleton Templates documentation.CommentShareDmitry Balabanovanswered 9 months ago
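To show what the case-sensitive structure keys look like in practice, here is a minimal boto3 sketch of create_pipeline with a Source stage and a manual Approval stage; only the pipeline structure keys are lowercase, while provider configuration keys (ConnectionArn, FullRepositoryId, ...) keep their documented casing. All ARNs, bucket and repository names are placeholders, not the poster's real values:

import boto3

codepipeline = boto3.client("codepipeline")

# Note the lowercase structure keys ("name", "stages", "actions", "actionTypeId", ...);
# a CLI JSON input file uses exactly the same casing.
pipeline = {
    "name": "newPipelineCicd",
    "roleArn": "arn:aws:iam::111111111111:role/service-role/my-codepipeline-role",
    "artifactStore": {"type": "S3", "location": "my-artifact-bucket"},
    "stages": [
        {
            "name": "Source",
            "actions": [
                {
                    "name": "ApplicationSource",
                    "actionTypeId": {
                        "category": "Source",
                        "owner": "AWS",
                        "provider": "CodeStarSourceConnection",
                        "version": "1",
                    },
                    "outputArtifacts": [{"name": "SourceArtifact"}],
                    "configuration": {
                        "ConnectionArn": "arn:aws:codestar-connections:us-east-1:111111111111:connection/example",
                        "FullRepositoryId": "my-org/my-repo",
                        "BranchName": "main",
                    },
                    "runOrder": 1,
                }
            ],
        },
        {
            # A pipeline needs at least two stages; Build/Deploy stages follow the same key pattern.
            "name": "Approve",
            "actions": [
                {
                    "name": "Approval",
                    "actionTypeId": {
                        "category": "Approval",
                        "owner": "AWS",
                        "provider": "Manual",
                        "version": "1",
                    },
                    "runOrder": 1,
                }
            ],
        },
    ],
}

codepipeline.create_pipeline(pipeline=pipeline)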
"Iam trying to export vmmy code is aws ec2 create-instance-export-task --instance-id i-***************** --target-environment vmware --export-to-s3-task file://C:\file.jsonfile.json code is{"ContainerFormat": "ova","DiskImageFormat": "VMDK","S3Bucket": "export-bucket-20211","S3Prefix": "vms/"}the Error isAn error occurred (NotExportable) when calling the CreateInstanceExportTask operation: The image ID (ami-***************) provided contains AWS-licensed software and is not exportable.FollowComment"
Iam trying to export vm and the error is The image ID (ami-0000000000) provided contains AWS-licensed software and is not exportable.
https://repost.aws/questions/QUnLUXN8gGSCa08Mn5NogMXQ/iam-trying-to-export-vm-and-the-error-is-the-image-id-ami-0000000000-provided-contains-aws-licensed-software-and-is-not-exportable
false
"0It looks like you are trying to export an instance which was created from image in AWS Marketplace. The other option is that your EC2 instance may contain third-party software provided by AWS. For example, VM Export cannot export Windows or SQL Server instances.Please check this document for more details - https://docs.aws.amazon.com/vm-import/latest/userguide/vmexport.html#vmexport-limitsCommentShareMaciej Malekanswered 5 months ago"
"I can successfully connect to Neptune from my local JupyterLab notebook:%%graph_notebook_config{ "host": "xxx-yyy.us-east-1.neptune.amazonaws.com", "port": 8182, "auth_mode": "IAM", "load_from_s3_arn": "", "ssl": true, "ssl_verify": true, "aws_region": "us-east-1"}I am able to run Gremlin queries that return results:%%gremling.V().limit(5)However, if I run an OpenCypher query:%%ocmatch (n) return n limit 5I get error:{'error': ProxyError(MaxRetryError("HTTPSConnectionPool(host='xxx-yyy.us-east-1.neptune.amazonaws.com', port=8182): Max retries exceeded with url: /openCypher (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 503 Service Unavailable')))"))}Any idea why Gremlin works but OpenCypher does not? Neptune versions is 1.1.1.0FollowCommentTaylor-AWS 2 months agoJust for validation, can you execute a %status command successfully? Also, what version of the graph-notebook library are you using? Based on the config output, this looks like an older version. I would suggest that you ensure you're using the latest version of graph-notebook.SharerePost-User-9844941 2 months ago%status returns error{'error': ProxyError(MaxRetryError("HTTPSConnectionPool(host='xxx.yyy.us-east-1.neptune.amazonaws.com', port=8182): Max retries exceeded with url: /status (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 503 Service Unavailable')))"))}I suspect a corporate network policy is blocking access. Not sure what to ask to unblock. What proxy is being referred to in error message?These are my package versions and dependencies:tornado==6.2graph-notebook==3.7.3jupyterlab==3.6.3Share"
Opencypher not working from local notebook
https://repost.aws/questions/QUWSDk-8UcTzK6PMg8n51rVQ/opencypher-not-working-from-local-notebook
false
"0Hello,Can you confirm if the same error is received when using the below:%%opencyphermatch (n) return n limit 5Also, does the error you receive happen intermittently or whenever you try to run it?CommentShareSUPPORT ENGINEERMohammed_RKanswered 2 months agorePost-User-9844941 2 months agoYes, same error using %%opencypher.Error occurs consistenly. Never worked.Share"
"Hi Everyone,When processing an SQS event with AWS Lambda, I want to know if the lambda function I am running (or the event source mapping more accurately) supports batch item failures. Is there a way to know from the SQS event?Knowing this would let my Lambda function return a response that is appropriate for the event source mapping.Until now I was running with an event source mapping that did not have batch item failures enabled (by mistake) and lost many messages as a result :-(Thanks,MoFollowComment"
How To Detect BatchItemFailure support from Lambda SQS Event
https://repost.aws/questions/QUie_OgcLlSayJOL5gE5-9Qg/how-to-detect-batchitemfailure-support-from-lambda-sqs-event
false
"0No. The event object does not contain any specific information to let you know if the event source mapping supports partial failure response or not. You will need either to make sure you configure your event sources correctly :) or call the get_event_source_mapping API, which I do not recommend as it will probably throttle and will add latency.CommentShareEXPERTUrianswered a year ago"
"I have registered a domain caunion-tech.com via Route 53. The hosted zone has a NS record with the below values:ns-142.awsdns-17.comns-796.awsdns-35.netns-1477.awsdns-56.orgns-1862.awsdns-40.co.ukI tried many times but the MX, TXT, CNAM, SOA records could not be resolved at all. I have verified that the NS value is correct. It completely follows the details under "Registered domain" page. Grateful if someone can advise. Many thanks.FollowComment"
Registered domain cannot be resolved
https://repost.aws/questions/QUt01IXaizSRCgy30bz6A1kA/registered-domain-cannot-be-resolved
true
"1Accepted AnswerHi THereyou mentioned thatThe hosted zone has a NS record with the below values: ns-142.awsdns-17.com ns-796.awsdns-35.net ns-1477.awsdns-56.org ns-1862.awsdns-40.co.ukHowever if I query your domain NS records, I see different ones.https://registrar.amazon.com/whois?domain=caunion-tech.comName Server: NS-123.AWSDNS-15.COMName Server: NS-1340.AWSDNS-39.ORGName Server: NS-1563.AWSDNS-03.CO.UKName Server: NS-655.AWSDNS-17.NETPlease update the NS records to match that of the hosted zone. Refer to Adding or changing name servers or glue recordsCommentShareEXPERTMatt-Banswered 2 months agoEXPERTBrettski-AWSreviewed 2 months ago0The issue you are describing can have multiple root causes. I just start with a very simple one, that you might have already checked:After registering the domain: Did you create a public or private hosted zone to create these entries? For your use case it should be a public hosted zone.It is strange because usually the name server should end with -00 for private hosted zones but if I query these server directly and ask for NS or SOA entries I get "REFUSED".CommentShareEXPERTAndreas Seemuelleranswered 2 months ago0Yes, it's the public hosted zone and the zone has been created. Don't know why the domain name cannot be resolved.Do you need further info to solve this problem?CommentSharerePost-User-2000300answered 2 months ago"
"Hello everyone,I saw the examples on GitHub on how to create an Amazon Braket Hybrid Jobs using the SDK. Since I am using a docker container I need to use boto3. But based on the description of the method 'create_job' (https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/braket.html#Braket.Client.create_job) I am not sure if I am creating an Amazon Braket Hybrid Jobs or just an Amazon Braket. Thanks in advance!FollowComment"
Create a Amazon Braket Hybrid Jobs using boto3 (python)
https://repost.aws/questions/QUyTnn9xmYS52Ks5bs2HYETw/create-a-amazon-braket-hybrid-jobs-using-boto3-python
false
"1When you use the create_job method, it creates an instance of Amazon Braket Hybrid Job. Sometime, people refer the instance of "Amazon Braket Hybrid Job" as "job" (lower case). So I think "Creates an Amazon Braket job" in the API doc you linked means creating an instance of Amazon Braket Hybrid Job.CommentShareTim Chenanswered 10 months ago"
I would like to run a node js app that uses the aws-iot-device-sdk-js-v2 as a greengrass component. I am unsure if this sdk will accept the credentials automatically from the TES?Would it then be possible to send data to AWS IOT Core Topics?Or is there a way to send data to AWS IOT Core topics using the AWS SDK for JavaScript v3?FollowComment
Using Token Exchange Service with aws-iot-device-sdk-js-v2 and greengrass
https://repost.aws/questions/QU6df2IoNCTy6h7DKC58EHig/using-token-exchange-service-with-aws-iot-device-sdk-js-v2-and-greengrass
false
"0Hi Philbegg,The aws-iot-device-sdk-js-v2 SDK is used for IoT operations which does not require TokenExchangeService component or its IAM role, but requires appropriate IoT permissions on the IoT Policy that is attached to the certificate that your IoT device has, please see https://docs.aws.amazon.com/greengrass/v2/developerguide/device-auth.html .From Greengrass Core devices, our recommended method to send data to IoT Core topics is through Greengrass IPC APIs we provide for that https://docs.aws.amazon.com/greengrass/v2/developerguide/ipc-iot-core-mqtt.html . But Greengrass IPC is not available with the JS SDK at the moment aws-iot-device-sdk-js-v2, the current supported languages are Java, Python and C++. You could either switch to one of these or if you need to keep using the JS SDK, you can still connect to IoT Core directly using it (not via Greengrass IPC) and the certificate on the device that you can find in the Greengrass root directory.The TES component and role are needed for any non IoT services e.g. S3 and to use that, you can declare a dependency on the aws.greeengrass.TokenExchangeService component inside your own component's recipe and provide relevant permissions on your Token Exchange IAM role policy.CommentShareshagupta-awsanswered 2 years ago"
"I am trying to burn in caption using Arabic caption. When I tried using SRT file containing the Arabic caption text, the resulting caption seems to be reversed compared to the original. Is this an intended behavior when burning in captions in RTL languages?Also, I tried set the fallback font setting to proportional font, but the result still came out with monospace font. I tried different settings for the font but it always came out as monospace.FollowComment"
RTL burn in caption using Elemental MediaConvert
https://repost.aws/questions/QUN8AGF0MDTIqUxafMPTq16A/rtl-burn-in-caption-using-elemental-mediaconvert
false
"Even if StartUrl is setup in "HKLM:\Software\Amazon\AppStream Client", the Windows Appstream clients prompts the user to type the URL and hit connect even though the URL is pre-populated.When StartUrl is setup in HKLM, the Windows Client should auto-connect instead of showing the URL and asking the user to hit connect. We're looking at replacing Citrix with Appstream and in a large scale deployment, this is too confusing for users. There should be a registry setting to disable the URL prompt completely and auto-connect users.Our Appstream environment user SAML with AzureAD, with Active Directory Domain-joined appstream instances.Thank you.FollowComment"
Appstream Windows Client prompts user for StartURL
https://repost.aws/questions/QU1fRwlNPySpKH9BBjyFkZcA/appstream-windows-client-prompts-user-for-starturl
false
"0Hi johnsteed -Thank you for the feedback. In the interim, one option you have is to provide your users a desktop/start menu shortcut that uses the AppStream 2.0 custom client handler with the URL value. When the user uses this shortcut to launch the application, provided the encoded URL is allowed, the native client will automatically start connecting.You can learn more about the capability here: https://docs.aws.amazon.com/appstream2/latest/developerguide/redirect-streaming-session-from-web-to-client.html. While the documentation calls out streaming URL, you can use the same URL that's in the registry key that you have created. Note that this URL needs to be base64 encoded, and include the HTTPS part.For example, you can create a desktop shortcut that executes:amazonappstream:aHR0cHM6Ly9lbmFibGVkLXVybA==. Provided the unencoded URL is allowed by the AppStream 2.0 client (either via start URL, trusted domains, or DNS lookup), the client will start connecting on launch.(If you want the shortcut to have the AppStream 2.0 icon itself, you can find it on the native client exe: %userprofile%\AppData\Local\AppStreamClient\appstreamclient.exe)Hope this helps.MuraliCommentShareEXPERTMuraliAtAWSanswered 3 years agoEddie Vev a month agoThis isn't working for me. We use OKTA for SSO, wondering if that is causing the issue.The URL in my shortcut is: amazonappstream:https://vanguardesf.oktapreview.com/home/amazon_appstream/0oa1jmmwp5hZRSmuI0h8/aln9q1lsu36TfGB4x0h7The shortcut opens the Appstream client, but the start URL from my registry is shown and it doesn't connect: https://vanguardesf.oktapreview.com/home/amazon_appstream/0oa1bx5l2brIlL9dX0h8/aln9q1lsu36TfGB4x0h7Share0Thank you. I've confirmed that it works as expected. If I hit "Disconnect" or "End Session", the Appstream client it goes back to the URL/connect screen. It might be less confusing for users if the Appstream client closes instead, at least if the starturl is defined in HKLM.CommentShareAWS-User-7557329answered 3 years ago0Hi johnsteed -I've logged that as a feedback item, as well.Thanks,MuraliCommentShareEXPERTMuraliAtAWSanswered 3 years ago"
"I'm trying to upgrade a RDS Aurora cluster via CloudFormation template but it fails with the error You must explicitly specify a new DB instance parameter group, either default or custom, for the engine version upgrade.. This error comes from the DBInstance (AWS::RDS::DBInstance) DBParameterGroupName definition. The CloudFormation template beneath is minimum test template to try out the Blue / Green deployment. It works quite well if I don't specify a DBParameterGroupName for the resource AWS::RDS::DBInstance. I do not modify the current running parameter, so I don't understand this error message. Is there any solution for this?AWSTemplateFormatVersion: '2010-09-09'Parameters: MajorVersionUpgrade: Type: String Description: Swap this between 'Blue' or 'Green' if we are doing a Major version upgrade AllowedValues: - Blue - Green EngineGreen: Description: 'Aurora engine and version' Type: String AllowedValues: - 'aurora-postgresql-10.14' - 'aurora-postgresql-11.16' - 'aurora-postgresql-12.11' - 'aurora-postgresql-13.4' - 'aurora-postgresql-13.7' - 'aurora-postgresql-14.3' EngineBlue: Description: 'Aurora engine and version' Type: String AllowedValues: - 'aurora-postgresql-10.14' - 'aurora-postgresql-11.16' - 'aurora-postgresql-12.11' - 'aurora-postgresql-13.4' - 'aurora-postgresql-13.7' - 'aurora-postgresql-14.3'Mappings: EngineMap: 'aurora-postgresql-10.14': Engine: 'aurora-postgresql' EngineVersion: '10.14' Port: 5432 ClusterParameterGroupFamily: 'aurora-postgresql10' ParameterGroupFamily: 'aurora-postgresql10' 'aurora-postgresql-11.16': Engine: 'aurora-postgresql' EngineVersion: '11.16' Port: 5432 ClusterParameterGroupFamily: 'aurora-postgresql11' ParameterGroupFamily: 'aurora-postgresql11' 'aurora-postgresql-12.11': Engine: 'aurora-postgresql' EngineVersion: '12.11' Port: 5432 ClusterParameterGroupFamily: 'aurora-postgresql12' ParameterGroupFamily: 'aurora-postgresql12' 'aurora-postgresql-13.4': Engine: 'aurora-postgresql' EngineVersion: '13.4' Port: 5432 ClusterParameterGroupFamily: 'aurora-postgresql13' ParameterGroupFamily: 'aurora-postgresql13' 'aurora-postgresql-13.7': Engine: 'aurora-postgresql' EngineVersion: '13.7' Port: 5432 ClusterParameterGroupFamily: 'aurora-postgresql13' ParameterGroupFamily: 'aurora-postgresql13' 'aurora-postgresql-14.3': Engine: 'aurora-postgresql' EngineVersion: '14.3' Port: 5432 ClusterParameterGroupFamily: 'aurora-postgresql14' ParameterGroupFamily: 'aurora-postgresql14'Conditions: BlueDeployment: !Equals [!Ref MajorVersionUpgrade, "Blue"] GreenDeployment: !Equals [!Ref MajorVersionUpgrade, "Green"]Resources: DBClusterParameterGroupGreen: Type: "AWS::RDS::DBClusterParameterGroup" Properties: Description: !Ref 'AWS::StackName' Family: !FindInMap [EngineMap, !Ref EngineGreen, ClusterParameterGroupFamily] Parameters: client_encoding: 'UTF8' DBClusterParameterGroupBlue: Type: "AWS::RDS::DBClusterParameterGroup" Properties: Description: !Ref 'AWS::StackName' Family: !FindInMap [EngineMap, !Ref EngineBlue, ClusterParameterGroupFamily] Parameters: client_encoding: 'UTF8' DBParameterGroupBlue: Type: 'AWS::RDS::DBParameterGroup' Properties: Description: !Ref 'AWS::StackName' Family: !FindInMap [EngineMap, !Ref EngineBlue, ParameterGroupFamily] DBParameterGroupGreen: Type: 'AWS::RDS::DBParameterGroup' Properties: Description: !Ref 'AWS::StackName' Family: !FindInMap [EngineMap, !Ref EngineGreen, ParameterGroupFamily] DBCluster: DeletionPolicy: Snapshot UpdateReplacePolicy: Snapshot Type: 'AWS::RDS::DBCluster' Properties: DatabaseName: 'dbupgradetest' 
DBClusterParameterGroupName: !If [GreenDeployment, !Ref DBClusterParameterGroupGreen, !Ref DBClusterParameterGroupBlue] Engine: !If [GreenDeployment, !FindInMap [EngineMap, !Ref EngineGreen, Engine], !FindInMap [EngineMap, !Ref EngineBlue, Engine]] EngineMode: provisioned EngineVersion: !If [GreenDeployment, !FindInMap [EngineMap, !Ref EngineGreen, EngineVersion], !FindInMap [EngineMap, !Ref EngineBlue, EngineVersion]] MasterUsername: 'user' MasterUserPassword: 'password123' Port: !If [GreenDeployment, !FindInMap [EngineMap, !Ref EngineGreen, Port], !FindInMap [EngineMap, !Ref EngineBlue, Port]] DBInstance: Type: 'AWS::RDS::DBInstance' Properties: AllowMajorVersionUpgrade: true AutoMinorVersionUpgrade: true DBClusterIdentifier: !Ref DBCluster DBInstanceClass: 'db.t3.medium'# DBParameterGroupName: !If [GreenDeployment, !Ref DBParameterGroupGreen, !Ref DBParameterGroupBlue] # <- this line / definition causes the error Engine: !If [GreenDeployment, !FindInMap [EngineMap, !Ref EngineGreen, Engine], !FindInMap [EngineMap, !Ref EngineBlue, Engine]]Here is an example of the execution order. It only works if DBParameterGroupName is not set.aws cloudformation create-stack --parameters ParameterKey=MajorVersionUpgrade,ParameterValue=Blue ParameterKey=EngineBlue,ParameterValue=aurora-postgresql-10.14 ParameterKey=EngineGreen,ParameterValue=aurora-postgresql-11.16 --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM --stack-name db-upgrade-test --template-url [path to template]Now switch to version 11.16 by changing the MajorVersionUpgrade value from Blue to Green. Other parameters are not modified.aws cloudformation update-stack --stack-name db-upgrade-test --use-previous-template --parameters ParameterKey=MajorVersionUpgrade,ParameterValue=Green ParameterKey=EngineBlue,ParameterValue=aurora-postgresql-10.14 ParameterKey=EngineGreen,ParameterValue=aurora-postgresql-11.16 --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAMNow switch to version 12.11 by changing the MajorVersionUpgrade value from Green to Blue and updating the value for EngineBlue to aurora-postgresql-12.11.aws cloudformation update-stack --stack-name db-upgrade-test --use-previous-template --parameters ParameterKey=MajorVersionUpgrade,ParameterValue=Blue ParameterKey=EngineBlue,ParameterValue=aurora-postgresql-12.11 ParameterKey=EngineGreen,ParameterValue=aurora-postgresql-11.16 --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAMFollowComment"
Major version upgrade of RDS Aurora Postgres cluster via CloudFormation with custom parameter group fails
https://repost.aws/questions/QUZ5wjdoPuS36L5rd-tp5vKQ/major-version-upgrade-of-rds-aurora-postgres-cluster-via-cloudformation-with-custom-parameter-group-fails
false
"Hi All,I am trying to deploy my nextjs web app to amplify. It works in localhost and when I tried deploying to vercel. I am using a library called chartjs-node-canvas which has a dependency on "node-canvas" npm library. When I deploy I see this error in my cloudwatch logsError: /lib64/libz.so.1: version `ZLIB_1.2.9' not foundAny idea how to fix this? My app still deploys successfully, it does not crash or anything, it just dont not include the functionality that I need from node-canvas libraru.I have tried many variations of setting the LD_LIBRARY_PATH but I must be doing it wrong.Thanks!FollowComment"
Issue Deploying to Amplify Error: ZLIB_1.2.9 not found
https://repost.aws/questions/QUa4kx1cn3R4Wu_XOLCXK39w/issue-deploying-to-amplify-error-zlib-1-2-9-not-found
false
"My customer is expanding an existing VPC by adding secondary CIDR blocks , which is in different range than original (VPC is in 10.xxx range, and expanded CIDR blocks are in 100.xx range).VPC is connected to on premises using Direct Connect.What are the changes required so that the instances in subnets under the secondary CIDR block range can communicate over direct connect and receive traffic back from on premises?FollowComment"
Secondary CIDR VPC block - Direct Connect
https://repost.aws/questions/QUaikU4WyAR-aIbehllFrYKA/secondary-cidr-vpc-block-direct-connect
true
"1Accepted AnswerIf the customer is using a Direct Connect Private VIF to terminate the Direct Connect on the Virtual Gateway in their VPC:For receiving new CIDR range on-premise, AWS would send new CIDR range in the next BGP update on the DX VIF session to customer's router. Customer does not have to make any config change.If the customer is using a Direct Connect Transit VIF to terminate the Direct Connect on Transit Gateway:They may need to modify the prefixes in Transit Gateway on the Direct Connection attachment to send the new range (100.x.x.x) to on premises.If automatic route propagation from the VPC attachment is enabled then the 100.x.x.x route will appear in Transit Gateway automatically.If automatic route propagation from the VPC attachment is disabled then the customer will need to add the 100.x.x.x route manually.On premises the customer will need to ensure that the 100.x.x.x route is accepted and added to any local routing protocols (static or dynamic).CommentShareAWS-User-3160197answered 4 years ago"
All my WorkMail users were unable to access their inbox earlier this morning; as soon as I accessed the AWS console, they were able to access their inbox again. Did something happen this morning with WorkMail services? It seems that this is happening from time to time.
workmail access to inbox issues
https://repost.aws/questions/QUCsqyamWAQFaxdukIBX7FGA/workmail-access-to-inbox-issues
false
"When I deploy a Greengrass group on a GGC running in a Docker container, as far as I can tell the group exists as long as the GGC image is running, and if the GGC image is restarted the group is not present and requires redeployment. I have thus far been redeploying from the AWS IoT console, but my use scenario has the core device powered on and off with relative frequency, with no guarantee of Internet connection.I would like to find a way to have the group details and associated functions persistent between sessions, or be saved and redeployed locally.FollowComment"
Persistent or local Greengrass group deployment in Docker container
https://repost.aws/questions/QUqzFEzi3nRpKcQOKGwL3u6A/persistent-or-local-greengrass-group-deployment-in-docker-container
false
"0I solved this by using the -v tag when running docker run to bind-mount a copied greengrass/ggc/deployment folder to /greengrass/ggc/deployment, which contained the group, lambda, and mlmodel folders.CommentShareole-OGanswered 3 years ago"
Athena and redshift are incorrectly grouped under analytics. quicksight is the correct tool in that category. But redshift should be under database. https://i.stack.imgur.com/8O65F.pngFollowComment
incorrect grouping
https://repost.aws/questions/QU-dcgL91pSvWf0LLo_1T3FA/incorrect-grouping
false
"0Hi,while it is correct that Athena is an SQL engine, and Refshift it is a DataBase, they are mostly used in Analytics workloads (ad-hoc queries and analysis) and Data Warehousing and ML (Redshift ML), hence the current classification.you can also check external sources which define Analytics databases as this one. And you will notice the classification of Presto and Amazon Redshift in this category as well.hope this helps clarifying.CommentShareEXPERTFabrizio@AWSanswered a year ago0Amazon Redshift is a fully managed data warehousing service, specifically designed for online analytic processing (OLAP) and business intelligence (BI) applications. This is different from most relational databases that are focussed on row-based transactions (OLTP). The analytics use case means it is grouped with the analytics services.Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena does not manage the data, or the S3 objects the data is stored in, so is not classed as a database service.The key differences between Amazon Redshift and Postgres are described in this dochttps://docs.aws.amazon.com/redshift/latest/dg/c_redshift-and-postgres-sql.htmlThis doc goes into more detail on why columnar storage is important for analytic query performance https://docs.aws.amazon.com/redshift/latest/dg/c_columnar_storage_disk_mem_mgmnt.htmlCommentShareAndrew_Manswered a year ago"
For some reason when players leave a GameSession they are not removed when I call the function RemovePlayerSession run on Server reliable. Why?REMOVEDUPLOADThe PlayerSessionID is the same used when calling AcceptPlayerSession.#GameLiftFollowComment
[Unreal] How to remove a PlayerSession when Player leaves a GameSession?
https://repost.aws/questions/QUneZLw4vGSICsg8KUKuGZmA/unreal-how-to-remove-a-playersession-when-player-leaves-a-gamesession
false
"0Just to clarify your question, are you currently calling RemovePlayerSession in a server rpc initiated by the client? Sorry if I'm misunderstanding.CommentSharerePost-User-5100160answered 2 years ago0I don't know if this is a RPC (Remote Procedure Call), as you can see above I call a function run on the Server. I also tried to run this function on the GameMode or GameInstance.CommentSharerePost-User-7570547answered 2 years ago0It does seem like it's a remote procedure call as I took a closer look just now. My next question would be then, when the client has this WB_InGameMenu widget displayed on its screen, is the client connected to the server on GameLift yet?CommentSharerePost-User-5100160answered 2 years ago0Yes, his PlayerSession is confirmed in the backend - and it appears as if he is connected, until he logs out, while the Server keeps maintaining the PlayerSession.CommentSharerePost-User-7570547answered 2 years ago0Next question would be, do you get an error message? Or the "Player Session Removed" message?CommentSharerePost-User-5100160answered 2 years ago0I have added a print node and this is what it saysLogBlueprintUserMessages: [WB_InGameMenu_C_0] Player Session Removed Error:No error message, since it is probably not copied to the client, nevertheless an error.So the Node is accessed but the server calls error, perhaps because the player already left?I try to call a previous node which removes the PlayerPawn after the RemoveSession - will report back, perhaps in about 12 hrs.CommentSharerePost-User-7570547answered 2 years ago0I would suggest remote connecting into the instance hosting your server and checking the logs there if possible. Here's a link with more info on that if you need it.CommentSharerePost-User-5100160answered 2 years ago0Thank you will check it out.CommentSharerePost-User-7570547answered 2 years ago0Hi @REDACTEDUSERFor some reason when players leave a GameSession they are not removed when I call the function RemovePlayerSession run on Server reliable. Why?Could you please help me understand what is not working? I.e. did you call DescribePlayerSessions with the playerId (or visited the console) and saw that the player Id status is not "COMPLETED"? (See doc on all the possible player status: https://docs.aws.amazon.com/gamelift/latest/apireference/API_PlayerSession.html#gamelift-Type-PlayerSession-Status)The canonical flow is:Player disconnects from the server process, triggering an event. This depends on how your server is implemented, e.g. if you are using Websocket servers, there will be an onClose event.In the event handling, you should call RemovePlayerSession, e.g. this is a simplified version of how a websocket server would implement it. 
(I free typed this so there might be syntactical errors)// This event gets called when player first initiates a websocket connection with the websocket serverwebsocketServer.on('connect', (websocketConnection, request) => { logger.info(`Player connected from IP: ${request.connection.remoteAddress}!`); // Stores the player session id for this websocket connection val playerSessionId; websocketConnection.on('message', async (message) => { if (message.type === 'login') { // Client connections are expected to pass in the playerSessionId when they first connect, see: // https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-sdk-server-api.html#gamelift-sdk-server-validateplayer playerSessionId = message.attributes.playerSessionId; // This sets your player session from RESERVED to ACTIVE let outcome = await GameLiftServerSDK.AcceptPlayerSession(playerSessionId); if (!outcome.Success) { // Log error } else { // Log success // You can now call GameLiftServerSDK.DescribePlayerSessions(...) if needed to retrieve the player data } } // Handle other message types, like player movements, chat, etc. }); // This gets called when the websocket connection is terminated, e.g. player disconnects, game client crashes, network timeout, etc. websocketConnection.on('close', async () => { // This sets your player session from ACTIVE to COMPLETE let outcome = await GameLiftServerSDK.RemovePlayerSession(playerSessionId); if (!outcome.Success) { // Log error } else { // Log success } });});You should be able to call DescribePlayerSessions, either from the GameLiftServerSDK or the AWS SDK/CLI and see that the player session is now in COMPLETE statusSee doc on the flow: https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-sdk-server-api-interaction-vsd.htmlCommentSharerePost-User-7108006answered 2 years ago0Is it possible to download the AWS CLI Fleet Log after I have terminated a fleet?Can you tell me the command to download a fleet log?CommentSharerePost-User-7570547answered 2 years ago0I am using an Unreal Plugin which uses the AWS SDK, why should I call DescribePlayerSessions when knowing the PlayerSessionID?Here are the server commandsREMOVEDUPLOADAfter the player has logged into the server I see in the backend that the PlayerSession is set to Accepted, this remains after the player has left the GameSession.CommentSharerePost-User-7570547answered 2 years ago0I don't believe you can look at the server logs once the fleet is terminated since you will no longer be able to remotely connect into the fleet's instances anymore. I would suggest spinning up another fleet with one instance with one server process running, remote connect to the instance, then recreate the situation with the client, and finally look at the server logs while being remote connected in a separate rdp/ssh window.As for why you would call DescribePlayerSessions through the aws cli like @REDACTEDUSERCommentSharerePost-User-5100160answered 2 years ago0Do you perhaps know the command for downloading the log? 
I look at this page but have no idea what a log group name is https://docs.aws.amazon.com/cli/latest/reference/logs/get-log-events.htmlI just want the command to get the GameSession log from a Fleet.Okay found it.aws gamelift get-game-session-log --game-session-id arn:******CommentSharerePost-User-7570547answered 2 years ago0When I try to download the log with this command:aws gamelift get-game-session-log --game-session-id arn:******I get this error:An error occurred (NotFoundException) when calling the GetGameSessionLogUrl operation: No log found for game session arn:******I can see the Fleet and active GameSession in the backend, and an active PlayerSession.The game log does not show an error for the DescribeGameSessions node, but again throws an error on RemovePlayerSession.Okay so I was able to download a log after terminating the Fleet. However, the log is not in *:txt format,I don't know how to open it.Next: Going to upload a Server build in debug mode.CommentSharerePost-User-7570547answered 2 years ago0but again throws an error on RemovePlayerSession.Could you post the error message here? Akshay hopefully helped you understood why the file is not opening in this post: https://forums.awsgametech.com/t/how-to-open-a-aws-gamelift-log/11161/2CommentSharerePost-User-7108006answered 2 years ago0Hi, this is the errorGame Server LogsError: Missing file/directoryC:\game\xxxxxx\Binaries\Win64\logfile.txtThe GameSession is still ongoing, because when a player leaves, the PlayerSession remains active. If I shutdown the fleet, the error is the same.CommentSharerePost-User-7570547answered 2 years ago0Hi @REDACTEDUSERCould you confirm "Missing file/directory..." was the error message you saw after calling RemovePlayerSession? I'm asking this because I don't recall RemovePlayerSession invoke logging logics, so it's possible that the error message came from a separate thread.Did you by any chance store the output of RemovePlayerSession? Does the Success attribute equal false?Do you recognize C:\game\xxxxxx\Binaries\Win64\logfile.txt as something you specified? E.g. in logPaths attribute that you passed to ProcessReady? If so, could you change the path to something valid and if you are unblocked? Alternatively, you may also pass an empty array for logPaths in ProcessReady, or add a dummy text file in C:\game\xxxxxx\Binaries\Win64\logfile.txtCommentSharerePost-User-7108006answered 2 years ago0There is no error message from the RemovePlayerSession node.I do not recognize that I have specified this path.I have already shipped a logfile.zip/txt - the error remains the same.CommentSharerePost-User-7570547answered 2 years ago0Could you send me a sample GameSessionId where RemoveGameSession was called but player session remained ACTIVE? 
I can have the GameLift team take a look in the backend.GameSessionId can be retrieved by DescribeGameSession or CreateGameSession, and looks something like this: `arn:******CommentSharerePost-User-7108006answered 2 years ago0Thank you for the suggestion, but for some reason player sessions are now activated and completed on logout!I am not exactly sure what change was responsible but if someone wants a particular screenshot of my Unreal blueprint setup let me know.Thank you James and Chris for your outstanding support!Next I need to figure out how to shutdown GameSessions :)CommentSharerePost-User-7570547answered 2 years ago0Nice, glad it worked.When a game session is completed, you should call GameLiftServerSDK::ProcessEnding and promptly exit the process.In Unity, this is Application.Quit(), but I'm not sure what the equivalence is in UE. This might be helpful: https://forums.awsgametech.com/t/server-process-force-terminated-event-when-manually-scaling-down-fleet-******CommentSharerePost-User-7108006answered 2 years ago"
"I have a very small python script that returns the error.I have packed this up two ways and get the same error. I have installed the packages directly to the folder, and then zipped. And I have also created a virtual environment, installed the packages and then zipped. I still get the same result.I have had a look but couldn't find a solution to this.Does anyone know why this isn't working?Error code below:{ "errorMessage": "Cannot load native module 'Crypto.Hash._SHA256': Trying '_SHA256.cpython-37m-x86_64-linux-gnu.so': /var/task/Crypto/Util/../Hash/_SHA256.cpython-37m-x86_64-linux-gnu.so: cannot open shared object file: No such file or directory, Trying '_SHA256.abi3.so': /var/task/Crypto/Util/../Hash/_SHA256.abi3.so: cannot open shared object file: No such file or directory, Trying '_SHA256.so': /var/task/Crypto/Util/../Hash/_SHA256.so: cannot open shared object file: No such file or directory", "errorType": "OSError", "stackTrace": [ " File \"/var/lang/lib/python3.7/imp.py\", line 234, in load_module\n return load_source(name, filename, file)\n", " File \"/var/lang/lib/python3.7/imp.py\", line 171, in load_source\n module = _load(spec)\n", " File \"<frozen importlib._bootstrap>\", line 696, in _load\n", " File \"<frozen importlib._bootstrap>\", line 677, in _load_unlocked\n", " File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\n", " File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\n", " File \"/var/task/main.py\", line 3, in <module>\n from coinbase.wallet.client import Client\n", " File \"/var/task/coinbase/wallet/client.py\", line 39, in <module>\n from Crypto.Hash import SHA256\n", " File \"/var/task/Crypto/Hash/SHA256.py\", line 47, in <module>\n \"\"\")\n", " File \"/var/task/Crypto/Util/_raw_api.py\", line 300, in load_pycryptodome_raw_lib\n raise OSError(\"Cannot load native module '%s': %s\" % (name, \", \".join(attempts)))\n" ]}FollowComment"
Unable to load native module
https://repost.aws/questions/QU9PReC-1FRkeY-bmSRK801Q/unable-to-load-native-module
true
"0Accepted AnswerIf you compiled on your local computer, it probably generated a different format of library than the lambda environment can use. pycrypto does not provide prebuilt wheels, so you'll need to compile it in an environment that matches what lambda runs: https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.htmlCommentShareEllisonanswered 4 years ago"
"Hello,required: Enable s3 bucket access for a specific permission set1.I have an SSO role in IAM for Billing. This is an AWS managed SSO Role and gives access to Billing Actions in its policy. AWSReservedSSO_BillingReadOnly_tagnumber.2.Have an IAM Identity Center Group, AWS-acctnum-BillingReaders-Prod, that has 4 SSO users.3. The above group has been assigned to permission sets below, user is able to see the permission sets on his login page, under the account.4. Also Have a permission set(BillingReadOnly) that has the AWS managed Billing policy- AWSBillingReadOnlyAccess and also an inline policy that allows access to s3 bucket, (ListBucket, GetObject)The SSO user who is part of group 2, sees this permission set on his login screen. But he does not see any buckets listed on s3.Note, anything that is AWS managed, cannot be altered, hence the addition of custom inline policy on the permission set.Any idea what's wrong here?Thanks in advance.FollowComment"
How can SSO users in a billing group access s3 buckets
https://repost.aws/questions/QU54OwKhS1RMC2YixZ2pZ6hQ/how-can-sso-users-in-a-billing-group-access-s3-buckets
true
"0Accepted AnswerIssue got resolved... The inline policy on the permission set, was restricting bucket by specific bucket on resource tag, and somehow this was not working. A specific bucket restriction should be added in condition by the new AWS condition tags.CommentShareSweeanswered 2 months ago0What is your S3 bucket policy look like?CommentShareNikoanswered 2 months agoSwee 2 months agoS3 bucket has basic access for AWSBillingConductor write, so that Billing can dump its monthly reports. Was advised to allow this access through IAM. On another note, had tried modifying s3 policy for that specific sso role arn, but that had not shown the bucket either. Can we add a permission set to s3 bucket policy, instead(permission sets are new to me).Share"
"Hi, I uploaded an app that took my Lightsail instance into the boostable zone at 40% (normally 0.75%, sustainable limit ~10%) for a moment until I deleted the app. Now I'm getting the UPSTREAM ERROR 515 when I try to SSH into my instance. It has been this way for ~24 hours. The instance is still running and my other apps/websites (DNS etc.) are still running.I have no attached storage or snapshots. If I take a snapshot now, will this error persist into a new instance based on that snapshot?FollowComment"
Can't connect to Lightsail SSH UPSTREAM ERROR 515
https://repost.aws/questions/QUr2LL6PPnSYqysxSIDk-VXA/can-t-connect-to-lightsail-ssh-upstream-error-515
false
"1Hello,Thank you for using Lightsail.The error code 515 tells us the instance is unreachable from our SSH service. It sounds like you've checked that your instance is within burst capacity.Did you make any changes to the open ports on your instance? Port 22 must be open to allow SSH connections.There are additional suggestions in this archived forum postIf the issue is with the SSH key installed on the instance, creating a new instance from snapshot could resolve your connection issue.Regards,GabrielCommentShareGabriel - AWSanswered a year ago0Hi there!I have found out the article that might help you with troubleshooting 515 error on Lightsail:https://aws.amazon.com/premiumsupport/knowledge-center/lightsail-resolve-ssh-console-errors/There are many reasons that may result in 515 error, so you can try to get more details regarding that:Instance boot failures, instance status check failures. or resource over-utilization on the instance.An OS-level firewall is blocking SSH port access.The default SSH port (22) is changed to a different one.The SSH service is down.CommentSharePiotr Blonkowskianswered a year ago"
"My requirements.txt file has the following--constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.5.1/constraints-3.10.txt"apache-airflow==2.5.1apache-airflow[amazon]==7.1.0apache-airflow[common.sql]==1.3.3apache-airflow[mysql]==4.0.0However, when I execute my DAG, I get an exception in the code that tries to import airflow.providers.mysql.hooks.mysql:ModuleNotFoundError: No module named 'airflow.providers.mysql'How do I get MWAA to correctly install that package?FollowComment"
MWAA getting ModuleNotFoundError: No module named 'airflow.providers.mysql'
https://repost.aws/questions/QUrSlCVL1xQlioMZZt_CO3Sg/mwaa-getting-modulenotfounderror-no-module-named-airflow-providers-mysql
false
"0You are seeing ModuleNotFoundError: No module named 'airflow.providers.mysql' which means the provided requirements are not installed . Since you used Constraint file the incompatible providers are not installed , you can find the the same in MWAA --> Scheduler log groups --> requirements_install_<worker_ip>.I suggest you to use below and try--constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.5.1/constraints-3.10.txt"apache-airflow-providers-amazon==7.1.0apache-airflow-providers-mysql==4.0.0apache-airflow-providers-common-sql==1.3.3CommentSharepriyathamanswered 25 days ago"
A customer is using Redshift in eu-west-3 and looking to use Aurora PostgreSQL as a data mart downstream. They asked me if Aurora works well for such use cases. Do any of you have any experience or warnings for such OLAP use cases? Thanks
Use Aurora Postgresql as Datamart for Redshift
https://repost.aws/questions/QUNRkmdR6cRSiHrpskj2ffEA/use-aurora-postgresql-as-datamart-for-redshift
true
"0Accepted AnswerThis can work well if the Aurora database has aggregated data that is optimized for specific query patterns and high concurrency. See the "Low Latency Hybrid Warehousing" section under Architectures for Scaling at https://github.com/aws-samples/aws-dbs-refarch-edw.Before going down this path, it is important to evaluate why they feel a separate datamart is needed to make sure we propose the right solution.CommentShareRyan_Manswered 4 years ago"
I have MenuItem table in DynamoDb which has a partition key "Id" and I want the maximum value of Id column before inserting new record. Please share the code for the same.FollowComment
Get Max Partition Key Value from DynamoDb Table
https://repost.aws/questions/QUtU_mhuDoRM-Z3YoqIrCTaA/get-max-partition-key-value-from-dynamodb-table
false
"0You'd first need a Global Secondary Index with Id as the sort key value, and a static value as the partition key:GSI_PKGSI_SKData1ID_01Data1ID_02Data1ID_03Data1ID_14DataThen you would Query your index with ScanIndexForward=False (DESC order) with a Limit=1 (return biggest ID). var client = new AmazonDynamoDBClient(); var request = new QueryRequest { TableName = "Table_Name", IndexName = "Index_Name", KeyConditionExpression = "#pk = :pk", ExpressionAttributeValues = new Dictionary<string, AttributeValue> { { ":pk", new AttributeValue { N = "1" } } }, ExpressionAttributeNames = new Dictionary<string, string> { { "#pk", "GSI_PK" } }, ProjectionExpression = "GSI_SK", Limit = 1 }; var response = await client.QueryAsync(request); Console.WriteLine(response);CommentShareEXPERTLeeroy Hannigananswered 12 days ago-1You could scan the whole table, looking for the max value, but I'm assuming you want to avoid that. The best option I can think of is to add a Global Secondary Index (GSI) to your table that has "Id" as a Sort (Range) key, and query that index, e.g.:# Use the Query operation to retrieve the first item in the index in descending orderresponse = dynamodb.query( TableName=table_name, IndexName=index_name, Select='SPECIFIC_ATTRIBUTES', AttributesToGet=[partition_key_name], Limit=1, ScanIndexForward=False)# Extract the maximum partition key value from the responsemax_partition_key_value = response['Items'][0][partition_key_name]You may need a dummy Hash (Partition) key for the GSI, always set to the same value, if nothing else suits your use case.CommentShareEXPERTskinsmananswered 4 months agoAtif 3 months agoI have created a GSI as per the above suggestion but when I am executing above I get the below errorvar blogRequest = new QueryRequest{TableName = "MenuItem",IndexName = "CategoryId-Id-index",Select = "SPECIFIC_ATTRIBUTES",AttributesToGet = { "Id" },ScanIndexForward = false,Limit = 1,}; var result = client.QueryAsync(blogRequest).GetAwaiter().GetResult();Error:Amazon.DynamoDBv2.AmazonDynamoDBException: 'Either the KeyConditions or KeyConditionExpression parameter must be specified in the request.'I am using C# languageShare"
"i am trying to understand how aws based suricata rules work. With these two rules below, all websites are working and i expect only for google.com to work. Am i missing any thing ? i understand that the order is pass, and then drop. i added the drop tcp with flow so tls.sni will be evaluated and the pass rule will work. It seems like it is working BUT i expected all other sites that don't match to not work ? (i have tried the DOMAIN LIST rule and that too doesn't work)NOTE - default order is in use, no stateless rules, forwarding frag and no frag packets is configured, INT network forward to FW SUBNET and then to the NAT SUBNET which then forward to IGW. HOME_NET is the VPC CIDR and EXTERNAL_NET is 0.0.0.0/0Rule 1pass tls $HOME_NET any -> $EXTERNAL_NET any (tls.sni; content:".google.com"; nocase; endswith; msg:"pp-Permit HTTPS access"; sid:1000001; rev:1;)Rule 2drop tcp $HOME_NET any -> $EXTERNAL_NET any (flow:established,to_server; msg:"pp-Deny all other TCP traffic"; sid: 1000003; rev:1;)FollowComment"
AWS firewall suricata rules not working as expected
https://repost.aws/questions/QUKgFkJ2URQDeEu1qasNdIJA/aws-firewall-suricata-rules-not-working-as-expected
true
"0Accepted AnswerFYI, this was resolved.In case any body is interested - This happened to be a routing issue. The NAT gateway subnet routing table had to include a return path explicitly via the firewall (gateway load balancer vpce-xxxx) entry. What's more troubling is that there is a lack of troubleshooting techniques and no mention in any documentation. I found one doc but that seems to suggest this is not required as the NAT gateway typically return the traffic from the same source it has received / which is not true.CommentSharepatilpanswered 2 months agoEXPERTTushar_Jreviewed 2 months ago"
"I have RDS MYSQL DB and using hibernate and c3p0 to connect. I am getting the below java stacktrace:org.hibernate.exception.JDBCConnectionException: The last packet successfully received from the server was 255,335,959 milliseconds ago. The last packet sent successfully to the server was 255,335,963 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.at org.hibernate.exception.internal.SQLStateConversionDelegate.convert(SQLStateConversionDelegate.java:131)at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:49)at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:125)at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:110)at org.hibernate.engine.jdbc.internal.proxy.AbstractStatementProxyHandler.continueInvocation(AbstractStatementProxyHandler.java:129)at org.hibernate.engine.jdbc.internal.proxy.AbstractProxyHandler.invoke(AbstractProxyHandler.java:81)at com.sun.proxy.$Proxy121.executeQuery(Unknown Source)at org.hibernate.loader.Loader.getResultSet(Loader.java:1953)at org.hibernate.loader.Loader.doQuery(Loader.java:829)at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:289)at org.hibernate.loader.Loader.doList(Loader.java:2438)at org.hibernate.loader.Loader.doList(Loader.java:2424)at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2254)at org.hibernate.loader.Loader.list(Loader.java:2249)at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:470)at org.hibernate.hql.internal.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:355)at org.hibernate.engine.query.spi.HQLQueryPlan.performList(HQLQueryPlan.java:195)at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1248)at org.hibernate.internal.QueryImpl.list(QueryImpl.java:101)Somehow the connection is getting lost. Is there somehow I can increase the wait_timeout or any other configuration on RDS MYSQL DB side? How do I track what is happening and what would be the possible reason/solution be? Please pour in any inputs which you have!FollowComment"
org.hibernate.exception.JDBCConnectionException: Communications link failure
https://repost.aws/questions/QUwnzdira5R5acTxFj9uDOJQ/org-hibernate-exception-jdbcconnectionexception-communications-link-failure
false
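The error message itself suggests two remedies: validate connections before use (for example via c3p0's connection-test settings) or raise wait_timeout. On RDS, wait_timeout is changed through a custom DB parameter group rather than SET GLOBAL; a minimal boto3 sketch, assuming the instance already uses a custom parameter group whose name below is a placeholder (default parameter groups cannot be modified).

import boto3

rds = boto3.client("rds")

rds.modify_db_parameter_group(
    DBParameterGroupName="mysql-custom-params",   # placeholder; must be the group attached to the instance
    Parameters=[
        {
            "ParameterName": "wait_timeout",
            "ParameterValue": "28800",            # seconds; pick a value larger than your idle periods
            "ApplyMethod": "immediate",           # wait_timeout is dynamic, so no reboot is needed
        },
    ],
)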
"Instance Type: r6g.12xlarge.searchNumber of nodes: 2Storage type: EBSEBS Volume Type: SSD - gp2EBS volume size: 80GBDedication m node-enabled: NoUltrawarm Storage: NoWe kicked off the opensech upgrade on 7/12/2022, and as of 10:20am on 9/12/2022 the status is still showing as;{"UpgradeStep": "PRE_UPGRADE_CHECK","StepStatus": "IN_PROGRESS","UpgradeName": "Upgrade from OpenSearch_1.3 to OpenSearch_2.3"}Via the cli, and in the console the status shows as "Check Eligibility" at 50%. This has been stuck at 50% for the past 36 hours, and from what I've read online and in the AWS documentation, this cannot be canceled without intervention from the backend team, and that support is only available with pro support plans. If this is true then this is ridiculous. I'm all for paying for support when you don't know how to fix a problem, but not when the functionality to fix the issue isn't available to us as admins.The cancel-service-software-update cli argument doesn't work either there are technically no pending scheduled updates planned.The issue is that we need to increase the EBD storage capacity to 800Gb for a large data upload but this cannot be done whilst the upgrade is stuck so this is now stalling productivity.If this cannot be resolved then we need to look at hosting our own elasticsearch dashboards where we have full control, without having to pay $1,000+ for a month of support to simply cancel a migrationHow can I cancel this software upgrade and increase the EBS storage?FollowComment"
Opensearch 1.3 > 2.3 upgrade stuck - PRE_UPGRADE_CHECK
https://repost.aws/questions/QUeBdBYnZzTs27f6t852qhmg/opensearch-1-3-2-3-upgrade-stuck-pre-upgrade-check
false
"I'm using one of the images listed here https://github.com/aws/deep-learning-containers/blob/master/available_images.md, to create an model such that I can tie that up with a sagemaker serverless endpoint , but I keep getting "failed reason: Image size 15136109518 is greater that suppported size 1073741824" . this work when the endpoint configuration is not serverless. is there any documentation around image/container size for aws managed images?FollowComment"
How to check/determine image/container size for aws managed images ?
https://repost.aws/questions/QU35dVp2D9SKKUnnVYGw9Z7A/how-to-check-determine-image-container-size-for-aws-managed-images
false
"1It sounds like you set up a serverless endpoint with 1GB of memory and the image is larger than that. You can increase the memory size of your endpoint with the MemorySizeInMB parameter, more info in this documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints-create.html#serverless-endpoints-create-configIf you pick a larger value for that (e.g. 4096 MB) then it should hopefully work.CommentShareHeikoanswered a year agoclouduser a year ago@Heiko - thanks, I tried with the max as well , i.e. 6 GB. I still get same error message.Shareclouduser a year ago@Heiko - also , when i create the endpoint configuration as Provisioned instead of serverless , it doesn't complain about the image size.ShareHeiko a year agoI just realised (it was hard to see without the thousand separators) that the image you're pulling is close to 16GB (I initially thought it was 1.6GB). Because it is 16 GB, even a config with 6GB memory won't be enough. It also makes sense that a provisioned instance doesn't complain as a provisioned instance has much more memory than a serverless endpoint.Can I ask the reason why you try to pick the image manually? Just asking because the Sagemaker API can pick the right image for you: https://sagemaker.readthedocs.io/en/stable/api/utility/image_uris.htmlShareHeiko a year agoExample:region = boto3.session.Session().region_nameimage_uri = sagemaker.image_uris.retrieve( framework='huggingface', base_framework_version='pytorch1.7', region=region, version='4.6', py_version='py36', instance_type='ml.m5.large', image_scope='inference')ShareHeiko a year agoAnd here is an example notebook that might be helpful: https://github.com/marshmellow77/nlp-serverless/blob/main/1_model_train_deploy.ipynbShare"
"Hello,I am working on a project and I need to get object from a bucket s3 from an ec2 using curl. The bucket is crypted with sse-kms.The specificity is that I can’t add header to my request so I can’t put a signature v4 in it. I encounter a signature problem when trying to get object that I don’t get when I use sse-s3.So my question is, is it possible to get object from sse-kms encrypted object using curl without header ?FollowComment"
Curl on a bucket crypted with sse-kms without header
https://repost.aws/questions/QUpa3sJyNzRxCI-Lvo_SvGRw/curl-on-a-bucket-crypted-with-sse-kms-without-header
true
"2Accepted AnswerWhat about using a presigned URL to download with query parameters?https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.htmlThis allows you to set it as a parameter in the URL, not in the header.CommentShareEXPERTRiku_Kobayashianswered a month agoEXPERTkentradreviewed a month ago"
"Hi all,I faced an strange scenario in a simple query on Redshift and i'm trying to understand if this is a bug or normal (i dont think so) behavior:My query (for example only):SELECT *FROM tableWHERE id = 'UUID'ORDER BY date DESCLIMIT 1This query, for some unknown reason are returning all records matching with WHERE clause but discarding order by and limit clauses.Even on "explain" from query editor it is not being considered.However if change the query to something like this:SELECT *FROM tableWHERE id LIKE 'UUID'ORDER BY date DESCLIMIT 1Basically changing the operator "=" per "LIKE" in WHERE clause my query works fine, the response returns only a specified limited records ordered correctly.Some hints: this strange behavior only happened in my tests when the parameter in WHERE clause is a string DistKey with UUID contentFollowComment"
Redshift Order BY LIMIT being ignored from query
https://repost.aws/questions/QUwWGNiqWhRMinkVvfPaoyqQ/redshift-order-by-limit-being-ignored-from-query
false
0I could not replicate this behavior on my table. Could it be something very specific to your table and your data? Maybe best to open a support ticket and see what they find.CommentShareMilindOkeanswered 11 days ago
"I have a private VPC with a VPC endpoint to Secrets Manager and a rotation function. Secrets manager is able to invoke the function, but the function can only intermittently communicate with secrets manager. I can see it calls the Secrets Manager API a number of times successfully, but after calling GetRandomPassword it just says "Resetting dropped connection".For full details see the following post:https://stackoverflow.com/questions/71807653/secrets-manager-rotation-timeoutFollowComment"
Secrets Manager rotation intermittent timeout
https://repost.aws/questions/QUt0En1yA-SxiDDBOIHdoCkQ/secrets-manager-rotation-intermittent-timeout
false
"For one of our On Demand EC2 "r5.large" instance, which is currently hosting SLES15 SP1; we have activated couple of SUSE Modules/Extensions which we are not able to Deregister as the Error says that "SUSEConnect error: SUSE::Connect::UnsupportedOperation: De-registration is disabled for on-demand instances. Use registercloudguest --clean instead."Please guide us on how we can deregister the same for On Demand EC2 instance.FollowComment"
Deregister the SLES15 SP1 Module for On Demand EC2 instance.
https://repost.aws/questions/QUnUww0MWlQ1qGlGoHmGYsrw/deregister-the-sles15-sp1-module-for-on-demand-ec2-instance
false
"I just started trying AWS. I have 2 EC2 instances running. One is LinuxBastion and the other is ibm-mq. I can use Putty on my Windows laptop to SSH into LinuxBastion. According to document, I have to use agent forwarding to SSH from LinuxBastion to ibm-mq because it is in the private subnet.On my LinuxBastion session, I got error "Permission denied (publickey)". Console output is shown below.[ec2-user@ip-10-0-149-123 ~]$ ssh -v -A 10.0.54.158OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017debug1: Reading configuration data /etc/ssh/ssh_configdebug1: /etc/ssh/ssh_config line 58: Applying options for *debug1: Connecting to 10.0.54.158 [10.0.54.158] port 22.debug1: Connection established.debug1: identity file /home/ec2-user/.ssh/id_rsa type 1debug1: key_load_public: No such file or directorydebug1: identity file /home/ec2-user/.ssh/id_rsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/ec2-user/.ssh/id_dsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/ec2-user/.ssh/id_dsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/ec2-user/.ssh/id_ecdsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/ec2-user/.ssh/id_ecdsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/ec2-user/.ssh/id_ed25519 type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/ec2-user/.ssh/id_ed25519-cert type -1debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_7.4debug1: Remote protocol version 2.0, remote software version OpenSSH_7.6p1 Ubuntu-4ubuntu0.5debug1: match: OpenSSH_7.6p1 Ubuntu-4ubuntu0.5 pat OpenSSH* compat 0x04000000debug1: Authenticating to 10.0.54.158:22 as 'ec2-user'debug1: SSH2_MSG_KEXINIT sentdebug1: SSH2_MSG_KEXINIT receiveddebug1: kex: algorithm: curve25519-sha256debug1: kex: host key algorithm: ecdsa-sha2-nistp256debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: nonedebug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: nonedebug1: kex: curve25519-sha256 need=64 dh_need=64debug1: kex: curve25519-sha256 need=64 dh_need=64debug1: expecting SSH2_MSG_KEX_ECDH_REPLYdebug1: Server host key: ecdsa-sha2-nistp256 SHA256:10R5udxzE60Uxw4p2pxVQOKm1NHt2IILwkATTqFwOdodebug1: Host '10.0.54.158' is known and matches the ECDSA host key.debug1: Found key in /home/ec2-user/.ssh/known_hosts:1debug1: rekey after 134217728 blocksdebug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug1: SSH2_MSG_NEWKEYS receiveddebug1: rekey after 134217728 blocksdebug1: SSH2_MSG_EXT_INFO receiveddebug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521>debug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug1: Authentications that can continue: publickeydebug1: Next authentication method: publickeydebug1: Offering RSA public key: /home/ec2-user/.ssh/id_rsadebug1: Authentications that can continue: publickeydebug1: Trying private key: /home/ec2-user/.ssh/id_dsadebug1: Trying private key: /home/ec2-user/.ssh/id_ecdsadebug1: Trying private key: /home/ec2-user/.ssh/id_ed25519debug1: No more authentication methods to try.Permission denied (publickey).FollowComment"
Error SSH from LinuxBastion to EC2 instance running IBM-mq
https://repost.aws/questions/QUKfjQElGGT66SfmkxIpvGSg/error-ssh-from-linuxbastion-to-ec2-instance-running-ibm-mq
false
"0have you set you key pair in you instance ?https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.htmland allow connect from you windows to you ec2 instances at security gorup or acl or firewall ?CommentSharelianswered a year ago0Yes, I did. The key-pair name is KEY-PUTTY-US-E2 That is what I use to Putty/SSH into the LinuxBastion instance. When I display the instances, the column "Key Name" shows KEY-PUTTY-US-E2 for both the LinuxBastion and ibm-mq.Here is the area that I am not clear. If I do SSH set up on on normal linux servers. I generate my key and copy the key to the target server. Here it seems that AWS did the key copy work already. Perhaps I misunderstood what the "key Name" column means.CommentSharerePost-User-1757982answered a year ago"
"Hello all,Our environment contains:AWS EventBridge event to run an AWS Fargate task once an hour;AWS ECS event and lambda to monitor task status changesLately, we sometimes see that the AWS Fargate task fails to start due to the stopReason:Rate limit exceeded while preparing network interface to be attached to instanceI have checked our quotas, and we are very far (<50 ENIs out of 5000 per region).What is the cause for this error?Is a retry (the lambda to RunTask the task) a good workaround?Thank you in advance,Boris.FollowComment"
A fargate task schedule via EventBridge fails to launch sometimes with stopReason "Rate limit exceeded while preparing network interface to be attached to instance"
https://repost.aws/questions/QUmziw6bIIQzePSwXg7IepyA/a-fargate-task-schedule-via-eventbridge-fails-to-launch-sometimes-with-stopreason-rate-limit-exceeded-while-preparing-network-interface-to-be-attached-to-instance
false
"0It sounds as though perhaps there's something in your account/region, unknown to you so far, that is manipulating ENIs frequently. I would recommend enabling CloudTrail and checking the activity logs for EC2 AttachNetworkInterface actions. A high request rate could result in the symptom you're describing.CommentShareEXPERTMichael_Fanswered a year agoRodney Lester a year agoYou can then make a request for a limit increase via support.ShareBoris Figovsky a year agoI have CloudTrail enabled with API access set to All, and the AttachNetworkInterface event is missing.In the RunTask event with the EventBridge identity, I do see the response: the task's launchType is FARGATE and the ENI's state is PRECREATED.Share"
"I am looking for a aws managed solution to redirect internet users to VPC based opensearch kibana dashboard.I have tried with App. Loadbalancer & IP based target group pointed to Opensearch ENI's private ip. And used Lambda & Cloudwatch event to keep monitor on ip change and update the target group ip's. It worked.However, Is there any other solution available in AWS which is highly available and redirect internet users to Opensearch kibana endpoint.FollowComment"
Access VPC Opensearch from Internet | Without nginx proxy | required AWS managed Service based solution
https://repost.aws/questions/QUBMYQJqZtQGyxZvQdEWwU6Q/access-vpc-opensearch-from-internet-without-nginx-proxy-required-aws-managed-service-based-solution
false
"0Can't it be OpenSearch set up for public access?CommentShareEXPERTRiku_Kobayashianswered 2 months agoBabuji R 2 months agono as per compliance it has to be inside vpc. but some users via internet they need access to kibana dashboard. the internet users wont use vpn's.ShareRiku_Kobayashi EXPERT2 months agoWhat about using a Systems Manager Session Manager proxy to access EC2 as a stepping stone?https://repost.aws/knowledge-center/systems-manager-ssh-vpc-resourcesUsing this configuration, you can access OpenSearch in the VPC from the EC2 on the trestle.Share0For internet user to access VPC based opensearch we did the followingcreated alb in public subnetcreate r53 cname mapping with albCreate target group with IP basedusing event bridge (createNetworkInterface & DeleteNetworkInterface) & lambda(python) we were able to query the ENI's and update the IP's in Target group.With the above approach internet users able to access the vpc based opensearchCommentShareBabuji Ranswered a month ago"
working on an Alexa Skill and I want to share with my teams the Alexa console together with its source code in Lambda and also an S3 bucket. How do I do that on AWS?FollowComment
"How do you share AWS Lambda functions of S3 Buckets to your teams, so they"
https://repost.aws/questions/QU76hvM-kVQiGc6eV8MIhtwA/how-do-you-share-aws-lambda-functions-of-s3-buckets-to-your-teams-so-they
true
0Accepted AnswerYou can create new users and assign policies from the IAM section as you want: IAM > Users > Groups > Attach policy. For the Alexa console, you can give a user access to edit your skill from the developer console (Amazon Developer). Add the user as a contributor so they can edit the skill from their own console.CommentShareharshmanvaranswered 4 years ago
"Hi folks,we're refactoring our Landing-Zone and I wonder if 2 AZs are sufficient for VPC? Why?Many thanks.FollowComment"
2 AZs vs 3 or more AZs
https://repost.aws/questions/QU6ndN_tPYR9K9NoXGA2chBw/2-azs-vs-3-or-more-azs
false
"3It's hard to answer this question without your application context and without knowing your reliability expectations - but in general terms, our best practice is to use all Availability Zones in a region.The more AZs, the smaller the failure domain - and the easier to deal with availability events. A counter point might be on additional costs, like cross-az traffic: so this should be factored in.CommentShareGiorgio@AWSanswered a year ago0I'd say 3 AZs minimum; that way if the worst happens and there's an AZ outage, you still have redundancy. Also the more AZs you use the greater the chance that your auto-scale groups etc will find available capacity when an AZ outage causes a rush for resources.CommentShareEXPERTskinsmananswered a year ago0There is also a cost component to this: If you are running in 3 AZs and one AZ is removed, you only have to accept 50% increase in load on the other 2 AZs. If you run in 2 AZs, then you get 100% load increase, so you have to have a larger capacity and higher cost to absorb the traffic.CommentShareRodney Lesteranswered a year ago"
"Can an existing SG be deployed using SSM Powershell to all EC2 instances? If it is possible, then would like to then remove the same group after done using it. I found little information on how to accomplish this.FollowComment"
Can a SG be deployed using SSM/PS to all Ec2 instances?
https://repost.aws/questions/QU2JzZttliSUqxL-71MU_XJA/can-a-sg-be-deployed-using-ssm-ps-to-all-ec2-instances
true
"1Take a look at this: Invoking other AWS services from a Systems Manager Automation runbook.Instead of running a script, you have SSM run the modify-instance-attribute API call against a set of instances.CommentShareEXPERTkentradanswered a year agoMonica_Lluis SUPPORT ENGINEERa year agoThank you for the answer, kentrad.AWS-User-8719565, let us know if this answers your question. If this solved your issue, please remember to click on the "Accept" button to let the community know that your question is resolved. This helps everyone. Thank you in advance.Share0Accepted AnswerThank you for your answer. I am really new to AWS, so it will take some time to figure out how to do it. Thanks again for your response Kentrad. It puts me in the right direction.CommentShareAWS-User-8719565answered a year ago"
"I'd like to use a device's Thing name in the MQTT topic subscription for a Greengrass Lambda, e.g.:{ "topic": "hello/${AWS_IOT_THING_NAME}", "type": "IOT_CORE"}This question was asked a year ago and the guidance was that this sort of configuration was not possible at that time, and instead one might setup the subscription manually within a pinned Lambda or custom component: https://repost.aws/questions/QU5dTq6p3mTsyT5dFPMtWvGA/v-2-use-thing-name-in-mqtt-topic-for-triggering-lambdaIs that still the case today? Alternatively, would it be possible to use the configuration merge feature to modify the Lambda subscription to a device-specific topic after deployment or some other simple workaround?Apologies for the duplicate question if indeed the guidance is still the same as before!FollowComment"
Greengrass V2 Lambda: Use Thing name in MQTT topic subscription
https://repost.aws/questions/QUaxGQ5aJVTZeuyODMPfsXUA/greengrass-v2-lambda-use-thing-name-in-mqtt-topic-subscription
true
"1Accepted AnswerHi there,We currently plan to support iot:thingName recipe variable within configuration section in Nucleus v2.6.0CommentShareLihaoanswered a year agoAWS-User-5344994 a year agoGreat, thanks!ShareAWS-User-5344994 a year agoTwo followup questions: Will iot:groupName be supported as well? And is there a planned release date for v2.6.0?ShareLihao a year agoSorry, it won't. Only the variable here (Recipe variables section) will be support in configuration section (without component_dependency_name:configuration:json_pointer)ShareAWS-User-5344994 a year agoOk, good to know, thanks!Share"
"Currently we use EKS with managed node groups. In the node group configuration you have to configure the min, max and desired capacity of the nodes.With the use of the classic Cluster Autoscaler the desired node capacity of the autoscaling group will be updated. Now we switched to the AWS developed autoscaler Karpenter (https://karpenter.sh/). Karpenter has a different handling and does not work with the autoscaling group or node group.So my question is, with the use of Karpenter, which is the best setting for the EKS min, max and desired capacity? Should we set all these to "0" for example?FollowComment"
EKS node capacity settings with Karpenter
https://repost.aws/questions/QUkSinVy1dSMa2IbkFOAbsrg/eks-node-capacity-settings-with-karpenter
false
"1Karpenter doesn't actually scale out nodes within the Managed Node Group. While you can use Karpenter alongside managed node groups, Karpenter defines its own set of limits (provisioner.spec.limits.resources). Kube scheduler will schedule to existing nodes in a managed node group if they exist. But when scaling, it will spin up new nodes that are not part of any Autoscaling Group or Managed Node Group. Hopefully that helps!CommentShareJeff_Wanswered a year agoAbhi a year ago+1Karpenter by design is different than cluster autoscaler. It aims at rapid scale-out for unscheduled pods and also quick scale-in (ttl based) when nodes are empty. The focus is now back on making compute available on-demand and fast.Sharekennyk a year agoIdeally, Karpenter should work in harmony with existing node groups (managed or not) by manipulating ASGs directly rather than creating unmanaged instances. Perhaps an alternate Provisioner could be developed to do this. It would be less complex than the existing Provisioner since there would be no need for subnetSelector, securityGroupSelector, InstanceProfile, etc. - all can be determined directly from the existing ASG and LaunchTemplate.Shareeptiger a year ago@kennyk It's actually not quite so easy to work with existing node groups because Karpenter would essentially need to duplicate the logic that Cluster Autoscaler already has to determine which group to scale up and all the additional logic around monitoring that group (which does essentially what you described). There are notable performance improvements in a group-less model, Justin does a better job than I of explaining it https://www.youtube.com/watch?v=3QsVRHVdOnMShare"
"Minimal latency for EFS OPEN(new file), WRITE(1 byte), RENAME and REMOVE?Thanks in advance for any help with this. I am evaluating EFS for use with an existing proprietary technology stack. Within the system there are many shards that each correspond to a database. When these databases are first opened there are (currently) several small files created, renamed and removed. The requirements are for each shard to be opened quickly, so ideally in under 50ms 95% of the time. I have noticed high latency with such operations when testing on EFS and am now wondering how to obtain minimal latency?I am testing with m4.10xlarge instance type in us-east-1d (using EFS DNS to mount in the same availability zone). I am in a VPC, could the VPC be adding latency?Model vCPU* Mem (GiB) Storage Dedicated EBS Bandwidth (Mbps) Network Performancem4.10xlarge 40 160 EBS-only 4,000 10 GigabitRunning amzn-ami-hvm-2018.03.0.20181129-x86_64-gp2 (ami-0080e4c5bc078760e). I started with a RHEL7.6 AMI but switched.I have tested EFS throughput modes provisioned 1024 MiB/s and Bursting, performance mode Max I/O and General Purpose (I read that MaxIO can have higher latency and I have observed this). All with 1.2TB of files on the filesystem and in the case of Bursting, plenty of Burst Credit Balance. Testing without encrytion at rest. Mount options are the default from the "Amazon EC2 mount instructions (from local VPC)", NFS client, so: mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-XXXX.efs.us-east-1.amazonaws.com:/ /efsTesting without ssl. What (NFS RTT) latency figures to expect?So far, after 10 runs of the command below, I am seeing the 1 byte write (to a new file) client NFS RTT is around 10 millisenconds, with the open, rename and remove all being between 5ms to 8ms:This is on a 1024 MiB/s PIOPS General Purpose EFS. mountstats /efs | egrep -B 3 'RTT: (([1-9][0-9])|([3-9]))' --no-group-separatorWRITE: 320 ops (12%) 0 retrans (0%) 0 major timeouts avg bytes sent per op: 328 avg bytes received per op: 176 backlog wait: 0.012500 RTT: 10.775000 total execute time: 10.803125 (milliseconds)OPEN: 320 ops (12%) 0 retrans (0%) 0 major timeouts avg bytes sent per op: 404 avg bytes received per op: 460 backlog wait: 0.009375 RTT: 7.390625 total execute time: 7.456250 (milliseconds)REMOVE: 320 ops (12%) 0 retrans (0%) 0 major timeouts avg bytes sent per op: 288 avg bytes received per op: 116 backlog wait: 0.003125 RTT: 6.390625 total execute time: 6.431250 (milliseconds)RENAME: 320 ops (12%) 0 retrans (0%) 0 major timeouts avg bytes sent per op: 440 avg bytes received per op: 152 backlog wait: 0.009375 RTT: 5.750000 total execute time: 5.771875 (milliseconds)This is on a 1024 MiB/s PIOPS MaxIO EFS. 
mountstats /efs | egrep -B 3 'RTT: (([1-9][0-9])|([6-9]))' --no-group-separatorWRITE: 320 ops (12%) 0 retrans (0%) 0 major timeouts avg bytes sent per op: 328 avg bytes received per op: 176 backlog wait: 0.012500 RTT: 13.746875 total execute time: 13.775000 (milliseconds)OPEN: 320 ops (12%) 0 retrans (0%) 0 major timeouts avg bytes sent per op: 404 avg bytes received per op: 460 backlog wait: 0.009375 RTT: 27.175000 total execute time: 27.196875 (milliseconds)REMOVE: 320 ops (12%) 0 retrans (0%) 0 major timeouts avg bytes sent per op: 288 avg bytes received per op: 116 backlog wait: 0.003125 RTT: 19.465625 total execute time: 19.515625 (milliseconds)RENAME: 320 ops (12%) 0 retrans (0%) 0 major timeouts avg bytes sent per op: 440 avg bytes received per op: 152 backlog wait: 0.012500 RTT: 19.046875 total execute time: 19.068750 (milliseconds)Testing with this command:export DIR=/efs ; rm $DIR/*.tmp ; ( time bash -c 'seq 1 32 | xargs -I {} bash -c "time ( dd if=/dev/zero of=$DIR/{}.tmp bs=1 count=1 conv=fdatasync ; mv $DIR/{}.tmp $DIR/mv{}.tmp ; rm $DIR/mv{}.tmp )" ' ) 2>&1 | grep realIs this around what to expect from EFS? Can anything be done to lower this latency?I have read https://docs.aws.amazon.com/efs/latest/ug/performance.html"The distributed nature of Amazon EFS enables high levels of availability, durability, and scalability. This distributed architecture results in a small latency overhead for each file operation. Due to this per-operation latency, overall throughput generally increases as the average I/O size increases, because the overhead is amortized over a larger amount of data. Amazon EFS supports highly parallelized workloads (for example, using concurrent operations from multiple threads and multiple Amazon EC2 instances), which enables high levels of aggregate throughput and operations per second."Can also be seen with this mainprog:// Compile with: g++ -Wall -std=c++11 test.cc#include "stdio.h"#include <string.h>#include <errno.h>#include <unistd.h>#include <sstream>#include <vector>#include <iostream>#include <chrono>int main(int argc, char* argv[]){ std::vector<std::string> args(argv, argv + argc); if (args.size() != 2){ std::cout << "Usage: " << args[0] << " dir_path" << std::endl; return 1; } for(int i=1;i<32;i++){ std::ostringstream oss_file; std::ostringstream oss_file_rename; oss_file << args[1] << "/test_" << i << ".tmp"; oss_file_rename << args[1] << "/test_" << i << "_rename.tmp"; FILE *fptr; auto start = std::chrono::system_clock::now(); auto start_for_total = start; fptr = fopen(oss_file.str().c_str(), "w"); auto stop = std::chrono::system_clock::now(); if( NULL == fptr ){ printf("Could not open file '%s': %s\n", oss_file.str().c_str(), strerror(errno)); } printf("time in ms for fopen = %3ld ",std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()); start = std::chrono::system_clock::now(); if( write( fileno(fptr), "X",1 ) <= 0 ){ printf("Could not write to file '%s': %s\n", oss_file.str().c_str(), strerror(errno)); } stop = std::chrono::system_clock::now(); printf("write = %3ld ",std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()); start = std::chrono::system_clock::now(); if( 0 != fdatasync( fileno(fptr) )){ printf("Could not fdatasync file '%s': %s\n", oss_file.str().c_str(), strerror(errno)); } stop = std::chrono::system_clock::now(); printf("fdatasync = %3ld ",std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()); start = std::chrono::system_clock::now(); if( 0 != fclose(fptr)){ 
printf("Could not fclose file '%s': %s\n", oss_file.str().c_str(), strerror(errno)); } stop = std::chrono::system_clock::now(); printf("fclose = %3ld ",std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()); start = std::chrono::system_clock::now(); if( 0 != rename(oss_file.str().c_str(), oss_file_rename.str().c_str())){ printf("Could not rename file '%s' to file '%s': %s\n", oss_file.str().c_str(), oss_file_rename.str().c_str(), strerror(errno)); } stop = std::chrono::system_clock::now(); printf("rename = %3ld ",std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()); start = std::chrono::system_clock::now(); if(unlink(oss_file_rename.str().c_str())!=0){ printf("Could not unlink file '%s': %s\n", oss_file.str().c_str(), strerror(errno)); } stop = std::chrono::system_clock::now(); printf("unlink = %3ld total = %3ld\n",std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count(), std::chrono::duration_cast<std::chrono::milliseconds>(stop - start_for_total).count()); }}On the EBS SSD /tmp filesystem:> time ./a.out /tmptime in ms for fopen = 0 write = 0 fdatasync = 3 fclose = 0 rename = 0 unlink = 0 total = 3time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 3 fclose = 0 rename = 0 unlink = 0 total = 3time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in 
ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2time in ms for fopen = 0 write = 0 fdatasync = 3 fclose = 0 rename = 0 unlink = 0 total = 3time in ms for fopen = 0 write = 0 fdatasync = 2 fclose = 0 rename = 0 unlink = 0 total = 2real 0m0.086sOn EFS General Purpose with 1024 MiB/s provisioned IOPS:> time ./a.out /efs_gp_1024piopstime in ms for fopen = 12 write = 0 fdatasync = 10 fclose = 0 rename = 7 unlink = 5 total = 37time in ms for fopen = 7 write = 0 fdatasync = 10 fclose = 0 rename = 7 unlink = 5 total = 32time in ms for fopen = 7 write = 0 fdatasync = 11 fclose = 1 rename = 13 unlink = 9 total = 42time in ms for fopen = 7 write = 0 fdatasync = 11 fclose = 1 rename = 9 unlink = 7 total = 37time in ms for fopen = 7 write = 0 fdatasync = 12 fclose = 2 rename = 11 unlink = 6 total = 40time in ms for fopen = 10 write = 0 fdatasync = 13 fclose = 4 rename = 11 unlink = 5 total = 46time in ms for fopen = 7 write = 0 fdatasync = 10 fclose = 0 rename = 20 unlink = 5 total = 44time in ms for fopen = 8 write = 0 fdatasync = 15 fclose = 6 rename = 14 unlink = 7 total = 52time in ms for fopen = 11 write = 0 fdatasync = 11 fclose = 3 rename = 15 unlink = 6 total = 48time in ms for fopen = 8 write = 0 fdatasync = 17 fclose = 1 rename = 11 unlink = 6 total = 44time in ms for fopen = 7 write = 0 fdatasync = 10 fclose = 1 rename = 8 unlink = 5 total = 32time in ms for fopen = 7 write = 0 fdatasync = 11 fclose = 1 rename = 8 unlink = 6 total = 34time in ms for fopen = 7 write = 0 fdatasync = 10 fclose = 0 rename = 8 unlink = 6 total = 34time in ms for fopen = 8 write = 0 fdatasync = 10 fclose = 0 rename = 8 unlink = 5 total = 33time in ms for fopen = 7 write = 0 fdatasync = 11 fclose = 0 rename = 7 unlink = 5 total = 33time in ms for fopen = 7 write = 0 fdatasync = 10 fclose = 1 rename = 7 unlink = 5 total = 32time in ms for fopen = 8 write = 0 fdatasync = 11 fclose = 0 rename = 7 unlink = 6 total = 34time in ms for fopen = 7 write = 0 fdatasync = 9 fclose = 0 rename = 7 unlink = 5 total = 31time in ms for fopen = 7 write = 0 fdatasync = 10 fclose = 1 rename = 8 unlink = 5 total = 32time in ms for fopen = 7 write = 0 fdatasync = 11 fclose = 0 rename = 8 unlink = 6 total = 35time in ms for fopen = 8 write = 0 fdatasync = 10 fclose = 1 rename = 7 unlink = 5 total = 32time in ms for fopen = 8 write = 0 fdatasync = 10 fclose = 1 rename = 8 unlink = 5 total = 33time in ms for fopen = 28 write = 0 fdatasync = 10 fclose = 0 rename = 7 unlink = 5 total = 54time in ms for fopen = 8 write = 0 fdatasync = 10 fclose = 1 rename = 7 unlink = 6 total = 35time in ms for fopen = 7 write = 0 fdatasync = 12 fclose = 1 rename = 11 unlink = 6 total = 39time in ms for fopen = 8 write = 0 fdatasync = 9 fclose = 0 rename = 7 unlink = 6 total = 33time in ms for fopen = 7 write = 0 fdatasync = 11 fclose = 1 rename = 8 unlink = 5 total = 35time in ms for fopen = 7 write = 0 fdatasync = 10 fclose = 1 rename = 11 unlink = 5 total = 35time in ms for fopen = 7 write = 0 fdatasync = 11 fclose = 0 rename = 8 unlink = 6 total = 35time in ms for fopen = 7 write = 0 fdatasync = 10 fclose = 1 rename = 8 unlink = 5 total = 33time in ms for fopen = 8 write = 0 fdatasync = 10 fclose = 1 rename = 7 unlink = 5 total = 33real 0m1.167sOn EFS MaxIO with 1024 MiB/s provisioned IOPS:> time ./a.out /efs_maxio_1024piopstime in ms for fopen = 35 write = 0 fdatasync = 13 fclose = 0 rename = 22 unlink = 19 total = 91time in ms 
for fopen = 26 write = 0 fdatasync = 12 fclose = 1 rename = 23 unlink = 19 total = 82time in ms for fopen = 29 write = 0 fdatasync = 12 fclose = 1 rename = 31 unlink = 20 total = 95time in ms for fopen = 27 write = 0 fdatasync = 13 fclose = 1 rename = 28 unlink = 19 total = 90time in ms for fopen = 25 write = 0 fdatasync = 14 fclose = 1 rename = 24 unlink = 18 total = 84time in ms for fopen = 28 write = 0 fdatasync = 11 fclose = 1 rename = 24 unlink = 22 total = 88time in ms for fopen = 24 write = 0 fdatasync = 13 fclose = 1 rename = 32 unlink = 18 total = 90time in ms for fopen = 27 write = 0 fdatasync = 11 fclose = 1 rename = 24 unlink = 19 total = 84time in ms for fopen = 24 write = 0 fdatasync = 14 fclose = 1 rename = 22 unlink = 17 total = 80time in ms for fopen = 27 write = 0 fdatasync = 12 fclose = 0 rename = 24 unlink = 21 total = 86time in ms for fopen = 26 write = 0 fdatasync = 13 fclose = 1 rename = 26 unlink = 18 total = 85time in ms for fopen = 26 write = 0 fdatasync = 13 fclose = 1 rename = 24 unlink = 17 total = 83time in ms for fopen = 26 write = 0 fdatasync = 13 fclose = 1 rename = 23 unlink = 19 total = 84time in ms for fopen = 27 write = 0 fdatasync = 12 fclose = 1 rename = 23 unlink = 18 total = 82time in ms for fopen = 28 write = 0 fdatasync = 16 fclose = 0 rename = 23 unlink = 18 total = 87time in ms for fopen = 28 write = 0 fdatasync = 13 fclose = 0 rename = 25 unlink = 19 total = 87time in ms for fopen = 24 write = 0 fdatasync = 10 fclose = 0 rename = 23 unlink = 18 total = 77time in ms for fopen = 28 write = 0 fdatasync = 15 fclose = 0 rename = 23 unlink = 19 total = 88time in ms for fopen = 26 write = 0 fdatasync = 13 fclose = 1 rename = 21 unlink = 18 total = 81time in ms for fopen = 25 write = 0 fdatasync = 13 fclose = 1 rename = 21 unlink = 16 total = 78time in ms for fopen = 24 write = 0 fdatasync = 14 fclose = 1 rename = 26 unlink = 17 total = 83time in ms for fopen = 26 write = 0 fdatasync = 14 fclose = 1 rename = 27 unlink = 20 total = 90time in ms for fopen = 27 write = 0 fdatasync = 11 fclose = 1 rename = 25 unlink = 21 total = 86time in ms for fopen = 24 write = 0 fdatasync = 11 fclose = 0 rename = 21 unlink = 17 total = 75time in ms for fopen = 29 write = 0 fdatasync = 16 fclose = 1 rename = 24 unlink = 17 total = 88time in ms for fopen = 27 write = 0 fdatasync = 13 fclose = 0 rename = 23 unlink = 31 total = 96time in ms for fopen = 25 write = 0 fdatasync = 14 fclose = 1 rename = 23 unlink = 17 total = 83time in ms for fopen = 27 write = 0 fdatasync = 13 fclose = 1 rename = 21 unlink = 17 total = 81time in ms for fopen = 28 write = 0 fdatasync = 14 fclose = 1 rename = 22 unlink = 17 total = 84time in ms for fopen = 24 write = 0 fdatasync = 13 fclose = 1 rename = 23 unlink = 18 total = 81time in ms for fopen = 26 write = 0 fdatasync = 12 fclose = 0 rename = 23 unlink = 18 total = 81real 0m2.649sOn EFS General Purpose in Bursting config:> time ./a.out /efs_bursttime in ms for fopen = 7 write = 0 fdatasync = 30 fclose = 0 rename = 25 unlink = 4 total = 68time in ms for fopen = 5 write = 0 fdatasync = 7 fclose = 0 rename = 5 unlink = 4 total = 23time in ms for fopen = 5 write = 0 fdatasync = 7 fclose = 0 rename = 5 unlink = 4 total = 23time in ms for fopen = 5 write = 0 fdatasync = 8 fclose = 0 rename = 5 unlink = 4 total = 24time in ms for fopen = 6 write = 0 fdatasync = 8 fclose = 0 rename = 5 unlink = 4 total = 25time in ms for fopen = 4 write = 0 fdatasync = 7 fclose = 0 rename = 5 unlink = 6 total = 25time in ms for fopen = 6 write = 0 fdatasync = 7 
fclose = 0 rename = 6 unlink = 3 total = 25time in ms for fopen = 5 write = 0 fdatasync = 8 fclose = 0 rename = 5 unlink = 4 total = 24time in ms for fopen = 5 write = 0 fdatasync = 7 fclose = 0 rename = 6 unlink = 7 total = 28time in ms for fopen = 4 write = 0 fdatasync = 8 fclose = 0 rename = 5 unlink = 4 total = 23time in ms for fopen = 5 write = 0 fdatasync = 8 fclose = 0 rename = 6 unlink = 4 total = 25time in ms for fopen = 5 write = 0 fdatasync = 9 fclose = 0 rename = 7 unlink = 5 total = 28time in ms for fopen = 6 write = 0 fdatasync = 8 fclose = 0 rename = 6 unlink = 4 total = 26time in ms for fopen = 5 write = 0 fdatasync = 7 fclose = 0 rename = 6 unlink = 4 total = 24time in ms for fopen = 5 write = 0 fdatasync = 9 fclose = 0 rename = 6 unlink = 4 total = 26time in ms for fopen = 6 write = 0 fdatasync = 9 fclose = 0 rename = 6 unlink = 4 total = 27time in ms for fopen = 6 write = 0 fdatasync = 8 fclose = 0 rename = 6 unlink = 4 total = 26time in ms for fopen = 6 write = 0 fdatasync = 7 fclose = 0 rename = 6 unlink = 3 total = 25time in ms for fopen = 5 write = 0 fdatasync = 7 fclose = 0 rename = 6 unlink = 4 total = 23time in ms for fopen = 5 write = 0 fdatasync = 7 fclose = 0 rename = 6 unlink = 4 total = 24time in ms for fopen = 7 write = 0 fdatasync = 8 fclose = 0 rename = 6 unlink = 4 total = 26time in ms for fopen = 5 write = 0 fdatasync = 10 fclose = 0 rename = 6 unlink = 4 total = 28time in ms for fopen = 5 write = 0 fdatasync = 8 fclose = 0 rename = 6 unlink = 4 total = 25time in ms for fopen = 5 write = 0 fdatasync = 8 fclose = 0 rename = 6 unlink = 4 total = 25time in ms for fopen = 5 write = 0 fdatasync = 11 fclose = 0 rename = 5 unlink = 4 total = 27time in ms for fopen = 5 write = 0 fdatasync = 7 fclose = 0 rename = 5 unlink = 3 total = 23time in ms for fopen = 6 write = 0 fdatasync = 16 fclose = 0 rename = 6 unlink = 4 total = 33time in ms for fopen = 7 write = 0 fdatasync = 8 fclose = 0 rename = 6 unlink = 4 total = 26time in ms for fopen = 5 write = 0 fdatasync = 7 fclose = 0 rename = 5 unlink = 4 total = 23time in ms for fopen = 4 write = 0 fdatasync = 7 fclose = 0 rename = 5 unlink = 4 total = 23time in ms for fopen = 4 write = 0 fdatasync = 7 fclose = 0 rename = 7 unlink = 3 total = 24real 0m0.845sThanks again for any input.Edited by: Indiana on Apr 25, 2019 12:28 PMEdited by: Indiana on Apr 25, 2019 12:28 PMEdited by: Indiana on Apr 26, 2019 2:40 AMEdited by: Indiana on Apr 26, 2019 2:54 AMFollowComment"
"Minimal latency for EFS OPEN(new file), WRITE(1 byte), RENAME and REMOVE?"
https://repost.aws/questions/QUj7il0TUzS4SjRQ9znJIFQw/minimal-latency-for-efs-open-new-file-write-1-byte-rename-and-remove
false
"0Hi Indiana,We will be sending you a private message shortly regarding this post.Thanks,LilianAmazon EFS TeamCommentShareAWS-User-0359576answered 4 years ago"
"Hi everybody,I wanna ask you about AWS Transcribe Analytics Call.API is well with AWS Transcribe but I need also sentiment Analysis, so I try to use AWS Transcribe Analytics. There is my code :from __future__ import print_functionimport timeimport boto3transcribe = boto3.client('transcribe', 'us-east-1')job_name = "my-first-call-analytics-job"job_uri = "PATH_S3_TO_WAV_WHO_HAD_WORD_FOR_AWS_TRANSCRIBE"output_location = "PATH_TO_CREATED_FOLDER"data_access_role = "arn:aws:s3:::MY_BUCKET_NAME_WHERE_WAV_FILES"transcribe.start_call_analytics_job( CallAnalyticsJobName = job_name, Media = { 'MediaFileUri': job_uri }, DataAccessRoleArn = data_access_role, OutputLocation = output_location, ChannelDefinitions = [ { 'ChannelId': 0, 'ParticipantRole': 'AGENT' }, { 'ChannelId': 1, 'ParticipantRole': 'CUSTOMER' } ]) while True: status = transcribe.get_call_analytics_job(CallAnalyticsJobName = job_name) if status['CallAnalyticsJob']['CallAnalyticsJobStatus'] in ['COMPLETED', 'FAILED']: break print("Not ready yet...") time.sleep(5)print(status)I had done aws configure and I use a IAM user who have AdministratorAccess.botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the StartCallAnalyticsJob operation: User: MY_ARN_USER is not authorized to access this resourceAny help please ?Thank you very much!FollowComment"
StartCallAnalyticsJob : User is not authorized to access this resource
https://repost.aws/questions/QUdFvEaGFgR9mjXTyKIxvsMg/startcallanalyticsjob-user-is-not-authorized-to-access-this-resource
false
"0Looks like your script doesn't have the right IAM permission to call this API. How are you configuring the credentials for this script? Take a look at this docuemntation for how to configure it. Also make sure the IAM user/role you are using has the correct permission to access Transcribe. To start with, you can use the managed AmazonTranscribeFullAccess policy and later shrink down to only the operations you need.CommentShareS Lyuanswered a year agorePost-User-7270310 a year agoHello I repeat the following steps and I get the same error.I creat a new IAM account Checked Access key - Access by programming Add permission AmazonTranscribeFullAccessI enter "aws configure" on terminal then the Access Key ID, Secret Access Key, region us-east-1 and output format json. I run the code and still get the error botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the StartCallAnalyticsJob operation: User: [ARN USER THAT I JUST CREATED] is not authorized to access this resourceAny help please ? I don't understand why AWS Transcribe work but AWS Transcribe Analytics not working ....ShareS Lyu a year agoIf the error message says ...User: [ARN USER THAT I JUST CREATED] is not authorized to access this resource then it means the python script is picking up the correct user. I also double-checked that the policy AmazonTranscribeFullAccess contains the StartCallAnalyticsJob permission. You can maybe try explicitly add the transcribe:StartCallAnalyticsJob permission to your IAM user.Are you using a personal account or a company account? Is the account in a AWS Organization? Maybe there are company-wide IAM permissions boundary that blocks you from using it?Share"
"Given specific geospatial points for a route (e.g. a flight route), can we show the route on a map through Quicksight? (similar to what can be done in plotly: https://medium.com/technology-hits/working-with-maps-in-python-with-mapbox-and-plotly-6f454522ccdd)FollowComment"
Displaying flight routes on Maps in Quicksight
https://repost.aws/questions/QUnH71XsXHTkygCsep_EDe3Q/displaying-flight-routes-on-maps-in-quicksight
false
"0Hello,Sharing an option that's possible in AWS: Quicksight has capabilities to plot such data on an interactive graph:https://docs.aws.amazon.com/quicksight/latest/user/filled-maps.htmlhttps://docs.aws.amazon.com/quicksight/latest/user/geospatial-charts.htmlhttps://towardsdatascience.com/awesome-aws-quicksight-visuals-every-data-analyst-should-know-e4e9302b2711 (scroll to points on map)However, QuickSight does not fully support custom visuals(if you try to achieve it) but does have limited support via this method: -https://docs.aws.amazon.com/quicksight/latest/user/custom-visual-content.html?icmpid=docs_quicksight_console_rssdochistoryYou can pass a parameter but it would not be fully integrated as a visual.Thank you !CommentSharePriyaAWS2020answered a year agoEXPERTFabrizio@AWSreviewed a year ago"
"I have created a new Redshift using CfnCluster construct with snapshotIdentifier specified.The snapshot contains one table and some data in it.However, the newly-created cluster does not contain the table or data.Do I miss something to restore a cluster from a snapshot ?FollowComment"
CDK: How to restore a Redshift from a Snapshot ?
https://repost.aws/questions/QUXatWgwfvSMmRkhzuvsxfMg/cdk-how-to-restore-a-redshift-from-a-snapshot
false
"1The snapshot identifier indicates the name of the snapshot from which the new cluster will be created. The stack looks good, however the issue needs further investigation to understand why the data is not restored from the snapshot.I request you to Create a support case so that we may further dive into the specifics of your configuration.CommentShareSUPPORT ENGINEERAWS-User-4315904answered a year ago0Problem solved for me.This was because the snapshotIdentifier passed from external file was undefined.My bad.CommentShareTazanswered a year ago"
"Hi,I'm running an experiement with Evidently.I produced some events and I'd be expecting a variation to be selected as the "winner".However, even though the best variation seems pretty obvious to me, it's showing the result as "Inconclusive".That's ok, maybe it needs more data - but it's not even showing the "Average" column!:Is there a minimum number of events until we see the average at least or something like that?Many thanks!DavidFollowComment"
"Evidently not showing experiment results, not even the average"
https://repost.aws/questions/QUvwKtzmkuSf24M1HBv6nV1A/evidently-not-showing-experiment-results-not-even-the-average
false
"I'm using the EC2 API's DescribeInstances method, which returns 'platform' but not 'platformDetails':https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.htmlI can see both 'platform' and 'platformDetails' properties for my EC2 VMs when viewing them in the admin console, and they are described in the Instance type:https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_Instance.htmlHow can I retrieve the 'platformDetails' via the web API?ThanksFollowComment"
How can I retrieve the 'platformDetails' property via the EC2 API?
https://repost.aws/questions/QUj1EckuzUTUG20KF2RIsVJQ/how-can-i-retrieve-the-platformdetails-property-via-the-ec2-api
false
"1I have used AWS CLI ec2 describe-instances, and successfully retreived the value of platformDetails.You may use describe-instances for platformDetails fields.Here is what I executed.$ aws --debug ec2 describe-instances --query 'Reservations[].Instances[].{id:InstanceId,platformDetails:PlatformDetails}'You can get the debug output from the response body to describeInstance API.I can find the platformDetails field in the response body XML.CommentShareJunseong_Janswered 9 months agoEXPERTChris_Greviewed 9 months ago"
"I want to connect with aws iot loarawn service without using physical device and gateway. In aws lorawan workshop they were manually ingesting data into mqtt topic (in no physical device scenario). But I want to connect using lns or cups endpoint which we get after registering gateway in aws iot lorawan service. So, are there any device or gateway simulators available using which I can connect with this aws iot lorawan service?FollowComment"
Lorawan device/gateway simulators available to connect to aws iot lorawan service?
https://repost.aws/questions/QUKLTHBkArTMGYYiE8bYnqdQ/lorawan-device-gateway-simulators-available-to-connect-to-aws-iot-lorawan-service
false
"0Hi, other than what Greg suggested, the service team is trying to provide some simulation tool as well. But it could come later. Can I know the purpose of you using the simulator? Is it to check your application receiving the payload or to try to find out some load testing result?CommentShareAWS-User-Russanswered 19 days agoshivam_khanna 17 days agoI am just trying to establish connection with iot Lora wan service using device and gateway simulators, and check whether we receive payload in this Lora wan service just like physical device scenario. The main motive is to establish connection with lns server using simulators. So, are there any simulators available?Share0Hi. The nearest thing I'm aware of is the Basic Station simulation framework.Tests can also be composed of the individual building blocks, for instance using real hardware against mock backend components or simulated hardware against real backend components.https://github.com/lorabasics/basicstation/tree/master/examples/simulationCommentShareEXPERTGreg_Banswered 19 days ago"
"My instance https://us-west-1.console.aws.amazon.com/ec2/v2/home?region=us-west-1#InstanceDetails:instanceId=i-c6edef82 is not accessible using ssh anymore (was fine before). Restart doesn't help.I assigned the role https://console.aws.amazon.com/iam/home?region=us-east-1#/roles/MySessionManagerInstanceProfile to it, but still can't connect using session manager. Can't use the serial console because I don't have a user with password.Please help.FollowComment"
Can't connect to my instance anymore
https://repost.aws/questions/QUpKKWjGmlT1WjQe14SxA4Rw/can-t-connect-to-my-instance-anymore
false
"0Looks like a side effect of VM upgrade triggered by AWS recently, I believe it turned on firewall that was not there before. Is there any way to fix it or revert?CommentShareD. Shultzanswered 2 years ago0UPDATE: this is resolvedCommentShareD. Shultzanswered 2 years ago0Elastic IP became detached during the updateCommentShareD. Shultzanswered 2 years ago"
"Is there any way to automatically move all WorkMail email attachments to S3 bucket and get their link of course.?If yes, how can I do that?ThanksRegardsFollowComment"
Move attachment to S3
https://repost.aws/questions/QUVSVewxl0QFqjOX51wF7rwQ/move-attachment-to-s3
true
"2Accepted AnswerHi,I would look into associating a lambda to work mail: https://docs.aws.amazon.com/workmail/latest/adminguide/lambda.html.Once an email is sent, the lambda should be triggered. The function would get the message and extract its attachments. Finally it would save them to S3.Hope it helps.CommentShareEXPERTalatechanswered 2 months agoplanetadeleste 2 months agoMany thanks, this what I looking for!Sharerobinkaws MODERATOR2 months agoHi, maybe this is a good starting point for you: https://serverlessrepo.aws.amazon.com/applications/us-east-1/489970191081/workmail-save-and-update-emailRobinShare"
"According to the documentation Greengrass Nucleus should rotate logs every hour (https://docs.aws.amazon.com/greengrass/v2/developerguide/log-manager-component.html). So I suppose no matter of the log size I should see a new file every hour. So the maximum delay of logs uploaded with LogManager would be 1 hour. I see some "random" rotations however. My configuration of Nucleus is default. Example of my rotations:-rw-r--r-- 1 root root 31953 Jun 6 07:32 greengrass.log-rw-r--r-- 1 root root 212 May 30 22:40 greengrass_2022_05_30_22_0.log-rw-r--r-- 1 root root 212 Jun 2 12:12 greengrass_2022_06_02_12_0.log-rw-r--r-- 1 root root 736 Jun 2 14:21 greengrass_2022_06_02_14_0.log-rw-r--r-- 1 root root 126334 Jun 2 20:40 greengrass_2022_06_02_20_0.log-rw-r--r-- 1 root root 53873 Jun 3 12:30 greengrass_2022_06_03_12_0.log-rw-r--r-- 1 root root 467 Jun 3 17:54 greengrass_2022_06_03_17_0.log-rw-r--r-- 1 root root 13527 Jun 3 19:20 greengrass_2022_06_03_19_0.log-rw-r--r-- 1 root root 212 Jun 6 06:10 greengrass_2022_06_06_06_0.logCould you clarify the 1 hour rotation rule?FollowCommentJanice-AWS a year agoHere are some details about greengrass log rotation you might find helpful:Greengrass logs rotate every hour or when the file reaches a size limit, whichever is sooner. The default file size limit is 1,024 KB (1 MB) and is configurable as fileSizeKB here:https://docs.aws.amazon.com/greengrass/v2/developerguide/greengrass-nucleus-component.html#:~:text=Default%3A%20FILE-,fileSizeKB,-(Optional)%20The%20maximumThe logs do not rotate if there has been no activity in the log since the last rotation.Logs will be deleted (earliest first) when the logs hit a disk space limit. The default limit for the greengrass log and each component log is 10,240 KB (10 MB), and is configurable as totalLogsSizeKB here:https://docs.aws.amazon.com/greengrass/v2/developerguide/greengrass-nucleus-component.html#:~:text=Default%3A%201024-,totalLogsSizeKB,-(Optional)%20The%20maximumShare"
Greengrass doesn't rotate logs every hour
https://repost.aws/questions/QULX6ZwLaaTf2WgK4DfdGLyg/greengrass-doesn-t-rotate-logs-every-hour
false
"Hi, I tried to enable DNSSEC on my domain drustcraft.com.au, however later found out that .com.au is not supported on AWS as a registrar. So I reversed what I done in hosted zones, however there is still a DS and RRSIG records being returned when I use dig after waiting the TTL. So I'm not really sure how that has happened.Registered Domains shows a DNSSEC status of "Not available for this TLD"Hosted Zones shows DNSSEC signing status of "Not signing"Have I borked my domain?, am I being impatient and need to wait longer?, any advice on where to go from here would be appreciated.Edited by: nomadjimbob on Apr 7, 2021 9:51 PMEdited by: nomadjimbob on Apr 8, 2021 2:30 AMFollowComment"
Cannot disable DNSSEC for domain
https://repost.aws/questions/QUYYdrdSo2QeaN42rOHzcO6Q/cannot-disable-dnssec-for-domain
false
0I've decided to instead transfer my domain and instances to another provider.CommentSharenomadjimbobanswered 2 years ago
"Is there a table that maps xid/pid to table id/schema name/database name? I found SVV_TRANSACTIONS, but that only has data on currently running transactions/locks and I want something that preferably covers 2-5 days of history like the STL tables.FollowComment"
How to get schema/database/table names associated with rows in STL_UTILITYTEXT
https://repost.aws/questions/QUeOGQB3Q-QiudDmQLLGIZuQ/how-to-get-schema-database-table-names-associated-with-rows-in-stl-utilitytext
false
"0Hi,Have you tried STL_TR_CONFLICT? https://docs.aws.amazon.com/redshift/latest/dg/r_STL_TR_CONFLICT.htmlRegards,CommentShareZiadanswered 4 months agorePost-User-6605802 3 months agoHi, is it guaranteed that all processes/transactions will be in STL_TR_CONFLICT? I thought that wasn't the case since not all transactions will conflict with others.ShareZiad 3 months agoHi, It contains all the transactions that faced conflicts. Are you looking also for transactions without conflicts?SharerePost-User-6605802 3 months agoYes, is that possible?ShareZiad 3 months agoNot that I'm aware of.Share"
"We are using pinpoint email for sending emails .Are there any way to know whether the email is delivered or not ?If the email is not delivered, does Pinpoint provide any way to identify this by sending some notifications for us to listen and act up on.Are there any workaround for doing this ?If pinpoint is not providing this functionality, does SES have this feature ?FollowComment"
Notification for email delivery status from Pinpoint Email or in SES
https://repost.aws/questions/QU5Bb1Xu1gQ5GpBSCS6mJSkw/notification-for-email-delivery-status-from-pinpoint-email-or-in-ses
false
"0Both Amazon SES and Amazon Pinpoint provide the functionality that you need. With Pinpoint, you need to enable event streaming. When you do that, you'll receive event notifications when emails are sent, delivered, clicked, bounced, etc. The delivery events will resemble the following example:{ "event_type": "_email.delivered", "event_timestamp": 1564618621380, "arrival_timestamp": 1564618622690, "event_version": "3.1", "application": { "app_id": "a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6", "sdk": {} }, "client": { "client_id": "e9a3000d-daa2-40dc-ac47-1cd34example" }, "device": { "platform": {} }, "session": {}, "attributes": { "feedback": "delivered" }, "awsAccountId": "123456789012", "facets": { "email_channel": { "mail_event": { "mail": { "message_id": "0200000073rnbmd1-mbvdg3uo-q8ia-m3ku-ibd3-ms77kexample-000000", "message_send_timestamp": 1564618621380, "from_address": "sender@example.com", "destination": ["recipient@example.com"], "headers_truncated": false, "headers": [{ "name": "From", "value": "sender@example.com" }, { "name": "To", "value": "recipient@example.com" }, { "name": "Subject", "value": "Amazon Pinpoint Test" }, { "name": "MIME-Version", "value": "1.0" }, { "name": "Content-Type", "value": "multipart/alternative; boundary=\"----=_Part_314159_271828\"" }], "common_headers": { "from": "sender@example.com", "to": ["recipient@example.com"], "subject": "Amazon Pinpoint Test" } }, "delivery": { "smtp_response": "250 ok: Message 82080542 accepted", "reporting_mta": "a8-53.smtp-out.amazonses.com", "recipients": ["recipient@example.com"], "processing_time_millis": 1310 } } } }}CommentShareEXPERTBrentAtAWSanswered 2 months agoG V Navin a month agoI think Enabling event stream will create kinesis streams. It will be bit costly I guess. (please correct me if i am wrong) Are there any other alternative cost effective solution ?ShareBrentAtAWS EXPERTa month agoI think Enabling event stream will create kinesis streams. It will be bit costly I guess. (please correct me if i am wrong) Are there any other alternative cost effective solution ?You can use Kinesis Data Firehose to stream this data. In my opinion, this is a cost-effective option.The sample event record I showed in my previous post is about 2KB. Let's assume you send 1 million messages every day for a month, and each message generates three event records (send, delivery, open). That would be 180GB of data over the course of one month. The cost for Kinesis Data Firehose streaming is USD$0.029 per GB. Your total charge for streaming this data would be $5.22 per month. If you stored the data in S3, that would be an additional $4.14 per month ($0.023 * 180GB).Share"
"Good evening everyone.Here on my company, we use a few AWS resources (some S3 buckets, a Beanstalk application, 2 RDS and an ElasticCache cluster). But, we have a lot of metrics (none of this resourses use custom metrics). As example, only on this 10 days, Cloudwatch costs us 58USD. (Our RDS is the second biggest cost ATM, and now is 20USD.)Can you guys give-me some help how I can find what is generate this cloudwatch bill?FollowComment"
Weird costs on Cloudwatch
https://repost.aws/questions/QU53-MyjIaRXefrc5aEj7Scw/weird-costs-on-cloudwatch
false
"1HiThere are two options for your inquiry.Analyze CloudWatch cost and usage data with Cost Explorer : Access to the Cost Explorer and filters for service with CloudWatch. Then choose Usage Type to Group By. You will see the which API Operation and Region generated the most costs. See the below link for the detail.Analyze CloudWatch cost and usage data with AWS Cost and Usage Reports and Athena: Another way to analyze CloudWatch cost and usage data is by using AWS Cost and Usage Reports with Amazon Athena. AWS Cost and Usage Reports contain a comprehensive set of cost and usage data. You can create reports that track your costs and usage, and you can publish these reports to an S3 bucket of your choicehttps://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_billing.htmlTo prevent cost surprise, I recommend using AWS Cost Anomaly Detection :https://aws.amazon.com/aws-cost-management/aws-cost-anomaly-detection/Thank youCommentShareAWS-User-9882077answered 25 days agoSamuel Grave 25 days agoHello, friend.I forgot to tell about this part.I already check on Costs Explorer and on Athena.On Athena, I saw most of the usage (95% of it) is on UsaseType CW:MetricMonitorUsage.As far I researched, It is a Custom Metric, right?It could be generated by Cost Explorer Reports?Share1Hi Samuel,UsaseType CW:MetricMonitorUsage is the custom metric. All the metrics under 'Custom namespaces' are charged.You can find the which API is the majority of usage as below filter setting.In order to start identifying where the majority of usage is coming from, you can set the Cost Explorer chart to filter the output by any "Usage Type" containing the word "MetricMonitorUsage", and group by "API Operation"Set the Filter by Usage Type (RegionCode-CW):MetricMonitorUsageGroup By Dimension = API OperationThank youCommentShareAWS-User-9882077answered 25 days agoSamuel Grave 25 days agoIt just shows "MetricStorage usage" as cost sourceFound it!My Beanstalk application have like 600 "NOT AWS" Custom Metrics configured on it. But when I check the instance, the monitoring is using the basic monitoring.Share0Does your beanstalk application publish logs in structured logs (called "EMF" for Embedded Metrics Format) format by any chance? If so, that could be a source of custom metrics.CommentShareJscanswered 24 days agoSamuel Grave 24 days agoit's a metabasse instance.I just checked the .ebextension file and there's a cloudwatch file with a lot of metrics.Thanks a lot guys!Share"
"Working on testing out Babelfish connectivity, which port should I be using? The same one that I use for connecting with pgadmin? Any links to documentation would be appreciated. (I'm currently in the preview program, please DM me if responses can't be posted here.)FollowComment"
What port does Babelfish work over and should I be able to test with osql?
https://repost.aws/questions/QUPOShTqhqQbmI3BpuIeUE2g/what-port-does-babelfish-work-over-and-should-i-be-able-to-test-with-osql
false
"0I've tried this so far. Added port TCP 1433 to the security group in addition to 1434 UDP, and the TCP port associated with my server. When I try to connect via SQL Server Management Studio I get these error messages.Specified cast is not valid:(Microsoft.SqlServer.ConnectionInfo)CommentSharecodingjoe-higanswered 2 years agolight a year agoYou should ideally upgrade to Babelfish 2.1+ (simply deploy Aurora 14.3 or higher) which improves SSMS support. However, if you're on an old version, you need to hit cancel on the default SSMS login and then connect using the "New Query" button (ie, the Query Editor). Blog with screenshots: https://aws.amazon.com/blogs/database/get-started-with-babelfish-for-aurora-postgresql/Share0Reached out to the babelfish-preview email address and got a response back.Had to connect via first creating a new query window in SSMS and then changing/starting the connection. Also had to specify the database name in the connection information. In the actual query, I had to include my tablename in quotes because I think SSMS was converting it to lowercase and my table name contained an uppercase letter.Also works with sql-cmd. Which is fine since it replaced osql forever ago. Just had to supply -d parameter with database.Edited by: codingjoe-hig on Jun 15, 2021 11:26 AMCommentSharecodingjoe-higanswered 2 years agolight a year agoThis applies to the Preview version. Babelfish 1.2.0 or above supports SSMS login out of the box. Simply connect with your user, password and server address. You will need to ensure that you open ports for SQL (1433) and, assuming you are using PostgreSQL, the default PostgreSQL port as well in your security groups.Share0Which port should I be using?Babelfish creates an additional port for the TDS protocol, by default, 1433. You can simultaneously connect via the PostgreSQL port (default 5432) with pgAdmin. (You can browse your data live from both ports, using different SQL clients for TDS and PostgreSQL.)How do I connect using Babelfish?The easiest way to get started is to use SSMS, which is free to download. When making your Aurora cluster, double check that you select PostgreSQL 13.6 or higher (which includes Babelfish 1.2.0+). Older versions of Babelfish have very limited SSMS support, which require workarounds.Babelfish is continually improving SSMS support, so choose the latest version of PostgreSQL, which will have the most up-to-date support.Babelfish documentation can be found here.CommentSharelightanswered a year ago"
"So I'm using KMS to sign JWT token. However I have been unable to verify the signature using the SDK. The snippet (in node) is as follows.let token_components = { header: base64url(JSON.stringify(headers)), payload: base64url(JSON.stringify(payload)),};let message = Buffer.from(token_components.header + "." + token_components.payload)let res1 = await kms.sign({ KeyId: 'arn:xxx', Message: message, SigningAlgorithm: 'RSASSA_PKCS1_V1_5_SHA_256', MessageType: 'RAW'}).promise()token_components.signature = res1.Signature.toString("base64").replace(/\+/g, '-').replace(/\//g, '_').replace(/=/g, '')let res2 = await kms.verify({ KeyId: 'arn:xxx', Message: message, Signature: token_components.signature, SigningAlgorithm: 'RSASSA_PKCS1_V1_5_SHA_256', MessageType: 'RAW'}).promise()With third party library the signature produced from sign can be verified using public key. But using KMS SDK the kms.verify method always fails with invalid signature exception. Referring from the documentation I think it should work as message and signature need to be either in Buffer (node's byte array) or String encoded in Base64. I'm not sure what went wrong and any help is greatly appreciated.Edited by: inmyth on Mar 5, 2021 7:27 AMEdited by: inmyth on Mar 5, 2021 7:28 AMFollowComment"
Cannot verify KMS signed message
https://repost.aws/questions/QUMj3eDD6sRXWWaZizjTVktw/cannot-verify-kms-signed-message
false
"0Figured it out. Basically the signature must not be url encoded (backslashes, dashes, equals have to be preserved). The input argument for verify should be its decoded base64 in byte array.CommentShareinmythanswered 2 years ago"
"This morning our Elastic Beanstalk environment has been automatically upgraded from Tomcat 8.5 Corretto 11 4.2.8 to Tomcat 8.5 Corretto 11 4.2.9.The application running in this environment (which is a Java Application Backend) started throwing the following error:Access to XMLHttpRequest at 'https://backend.example.com/api/exampleservice/service?exempleparam=1000' from origin 'https://frontend.example.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.This error has never been thrown with previous minor versions of Tomcat 8.5 Correto 11 4.2 (i.e. 6,7 and 8).In order to further investigate this issue we have disabled web security in Chrome to avoid the CORS check perfromed by the browser. Under this configuration we received a 404 HTTP error when issuing the request.Seems like our .war application is not properly deployed in Tomcat 8.5 Correto 11 4.2.9. We would like to know if this minor version requieres specific configuration that we might be ignoring.We had to move back the Elastick Beanstalk environment to Tomcat 8.5 Corretto 11 4.2.8. Everything worked properly with the same .war. No CORS errors, no 404 Errors.Can anybody provide some advice? Is it a Tomcat 8.5 Corretto 11 4.2.9 problem that will be fixed by AWS or is it necessary to perform particular configuration on Elastic Beanstalk to properly manage applications using this particular application server minor version.Thanks in advance.FollowComment"
Elastic Beanstalk Tomcat 8.5 Corretto 11 4.2.9: Requests blocked by CORS policy + 404 Error
https://repost.aws/questions/QUncVsXMQJSgOm7_nF4B3yLw/elastic-beanstalk-tomcat-8-5-corretto-11-4-2-9-requests-blocked-by-cors-policy-404-error
true
0Accepted AnswerJust to close the loop: deploying our .war on Tomcat 8.5 Corretto 11 4.2.11 works just fine. We did not make any adjustments to code or configuration, just changed from Tomcat 8.5 Corretto 11 4.2.8 to Tomcat 8.5 Corretto 11 4.2.11.There have probably been issues with Tomcat 8.5 Corretto 11 4.2.9 and Tomcat 8.5 Corretto 11 4.2.10, since they are no longer available options in the Elastic Beanstalk environment.CommentShareAWS-User-9094158answered a year ago
I am using Polly on my blog for all posts. For each post there are up to 10 separate audio files generated within the media tab. Why is there more than one audio file per blog post? And how do I correct this?FollowComment
Why is Polly TTS generating multiple audio files per post within WordPress Media tab
https://repost.aws/questions/QUgPNXvkicQKKuL4RSy4Wm1Q/why-is-polly-tts-generating-multiple-audio-files-per-post-within-wordpress-media-tab
false
0Have you configured creating audio for content in multiple languages? Can you confirm what is in that audio files? All same or different languages?CommentShareSanjay Aggarwalanswered 7 months ago
"I am new to cognito, I can see there is a cognito sdk for .net framework application.if I use the sdk, does it mean I need to create my own signin UI, so I cannot get the benefit on the host-UI by cognito?using sdk, how can I federate azure ad in the code? do I also need to create a UI to display the whole flow and redirect user to the microsoft signin page?what are the steps to link with multiple Azure (tenants) organizations in the cognito?ThanksLeiFollowComment"
Cognito .net framework - federate azure ad
https://repost.aws/questions/QU7bWpZDWRSj2t-1HeDLB8_w/cognito-net-framework-federate-azure-ad
false
"0Hi Lei, I would recommend reviewing the Amazon Cognito and AWS SDK for .NET Developer Guides if you haven't already, as there are sections on multi-tenancy and examples that may help answer your questions. There is also an AWS Security blog post about setting up Amazon Cognito for federated authentication using Azure AD.Additionally, if you can provide more details on the authentication flows/use-case(s) that you're looking to support, someone might be able to give a more detailed answer. Hope that helps!CommentShareSierra Washingtonanswered 9 months ago"
"Can the Comprehend model return a full text from a context phrase that I search for?For example, suppose this text below is an area in a newspaper (in .PDF format) that contains several subjects and that talks about Weddings in a specific area containing title and body text:Wedding 2022We had a very traditional wedding and it was extremely expensive, but it was worth it. Carol and I only paid half. Her parents paid for everything else. We got married in church. Carol wore a white dress and she looked fantastic. I wore a suit and I think I looked quite good too! We had a big reception. We had 200 guests. The reception was in a wonderful hotel. We took lots of pictures. It was just great!If I send the model “Wedding 2022” and “We got married in church” will it be able to find this text among different themes and will I be able to receive all this text below?Wedding 2022We had a very traditional wedding and it was extremely expensive, but it was worth it. Carol and I only paid half. Her parents paid for everything else. We got married in church. Carol wore a white dress and she looked fantastic. I wore a suit and I think I looked quite good too! We had a big reception. We had 200 guests. The reception was in a wonderful hotel. We took lots of pictures. It was just great!Is Comprehend the best tool to try to solve this problem?FollowComment"
Comprehend can find text?
https://repost.aws/questions/QUkV-eSttVRCm6y-MHW5i9iQ/comprehend-can-find-text
false
"1Comprehend is not a search tool. It is an API that will make it easy to :Detect the dominant languageDetect named entitiesDetect key phrasesDetermine sentimentAnalyze targeted sentimentDetect syntaxDetect eventsDo Topic modelingfrom documents you provide through the real-time or batch API.It will provide json formatted response containing the inferred elements. For instance:{ "LanguageCode": "en", "KeyPhrases": [ { "Text": "today", "Score": 0.89, "BeginOffset": 14, "EndOffset": 19 }, { "Text": "Seattle", "Score": 0.91, "BeginOffset": 23, "EndOffset": 30 } ]}Notice that the response contains BeginOffset and EndOffset which tell you where the entity was detected in the document should you want to pull the text (or more text arround it) from the document.If your objective is to do natural language full text search on documents, I'd recommend looking into Amazon Kendra (https://aws.amazon.com/kendra/)If you want to see both these solutions in action to provide Knowledge extraction and natural language search powered by AI/ML you can check out the Document Understanding Solution : https://aws.amazon.com/solutions/implementations/document-understanding-solution/CommentShareJean Malhaanswered a year ago"
"Would like to route API Gateway invocation based on source IP Address. Eg. is source IP 10.x.x.x then invoke function A, if source IP 11.y.y.y then invoke function B. Similar with what Route53 supports for routing based on IP Address but we don't have access to Route53.Thank you in advance,LucianFollowComment"
API Gateway set stage variable based on source IP Address
https://repost.aws/questions/QU0_2WoCNvTOeKBzTH32z12Q/api-gateway-set-stage-variable-based-on-source-ip-address
false
"2API Gateway does not support content based routing. One option might be to invoke a Lambda function that will invoke the appropriate backend.CommentShareEXPERTUrianswered 4 months agoEXPERTiwasareviewed 3 months ago1A potential design could be that you create an API Gateway backed by a Step Function. (https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-api-gateway.html).Then you could pass the IP or get it as part of header/payload (encrypted if you wish), passit to a Choice step that based on that decides which Step/Lambda function to call.CommentShareEXPERTalatechanswered 4 months ago"
"Hello,I am looking for an automated way to ship/stream RDS logs 'Postgresql' to S3 to be picked up by another lambda function in order to ship the file to elastic cluster running on EC2.I published the logs to CloudWatch logs but the only way to send it to S3 is by manually selecting the bucket and the date range. There is option to use CloudWatch Logs Subscription Filters to stream the logs to S3. However, we are looking for something more simpler and cheaper.Any thoughts?FollowComment"
RDS/Postgres logs to S3
https://repost.aws/questions/QU8okjmEiJQWqlE9KJEYz7tQ/rds-postgres-logs-to-s3
true
"0Accepted AnswerI think that Kinesis Data stream might be best fit for this case. (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs//Subscriptions.html) - Please, use kinesis data stream instead of kinesis firehose.CommentShareAWS-User-1317674answered 4 years ago"
"Hey, I have a webhook endpoint where our service provider send a payload which I have to respond to within 2 seconds. I've been getting way too many timeout errors from the service provider, meaning I wasn't able to respond within 2 seconds.I did some digging as to when the Fargate Server gets the payload vs when the ALB receives it. I went through some of the access logs from the ALB and found that it takes about a second or so to pass the payload from ALB to the fargate server.Here's the timestamp at which the request arrived to the ALB - 15:19:20.01 and my server recieved it at - 15:19:21.69.There's over a second of difference, I wanna know how to reduce it. One of the solution I thought of was that instead of registering my domain + the URI to the service provider to send webhook to, I set my IP + the URI so there's no need of forwarding done by ALB.Let me know what you guys think.EDIT - The solution I thought of was pretty stupid because fargate provides a new IP everytime a new task is deployed (as far as I know). Also the ALB forwards the request / payload to the ECS Target Group, just throwing this fact in as well.FollowCommentMichael_F EXPERTa year agoBefore we try to dive too deep into the potential cause, let's make sure that you're looking at the right data. Instead of looking at timestamps (which might reflect the time the request was completed instead of when it was received), let's look instead at the latency data published in the ALB logs. The particular fields to look at in the ALB logs are the request_processing_time and target_processing_time fields as described here: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html#access-log-entry-syntax - the 6th and 7th fields.ShareMichael_F EXPERTa year agoThe target_processing_time field is the most important field: "The total time elapsed (in seconds, with millisecond precision) from the time the load balancer sent the request to a target until the target started to send the response headers." Also, please compare against the actual response latency from your application logs if possible; timestamps aren't enough because they are often vague. My experience is that ALB request latency to a Fargate target is in the sub-millisecond range.Share"
How to reduce the time it takes a request to pass from a ALB to the actual Fargate Server?
https://repost.aws/questions/QUN5t2CToPQhee_JqHw9Kz0A/how-to-reduce-the-time-it-takes-a-request-to-pass-from-a-alb-to-the-actual-fargate-server
false
"0Hello,As specified by Michael_F in the comments, it is essential to identify the root cause for your latency before we jump into figuring out the solution. There could be multiple reasons for the latency. You can find out the time taken during various phases of the HTTP connection using curl as shown below.curl -L --output /dev/null --silent --show-error --write-out 'lookup: %{time_namelookup}\nconnect: %{time_appconnect}\npretransfer: %{time_pretransfer}\nredirect: %{time_redirect}\nstarttransfer: %{time_starttransfer}\ntotal: %{time_total}\n' 'google.com'Output for the above request looks something like below:lookup: 0.002451connect: 0.000000pretransfer: 0.011611redirect: 0.016241starttransfer: 0.068402total: 0.075255Using this info, you can pin-point your request latency to a specific phase in the request-response process, and further investigate the root cause.It is also helpful to enable ALB access logs as mentioned by Michael_F in the comments.If you are unable to figure out the problem, please feel free to reach out to AWS Support to investigate the issue by following this link: https://docs.aws.amazon.com/awssupport/latest/user/case-management.html#creating-a-support-caseI hope this info is helpful to you!CommentShareSUPPORT ENGINEERVenkat Penmetsaanswered a year ago"
"Hi,I am trying to create an HLS live streaming pipeline with MediaLive:INPUT: either a MP4 file or a fMP4 HLS containing a HEVC video with alpha channel.OUTPUT: a fMP4 HLS stream of HEVC video with alpha channel.The channel fails to start with the following errors.input is a MP4 file: Input failed over to input [Input Name] id[1]input is fMP4 HLS: Unable to open input file [https://cloudstorage.com/hlsPlaylist1/segment.m4s]: [Failed probe/open: [No parser found for container]]I need to precise that my channel works fine when the input is HEVC without alpha channel. Is there a plan to add HEVC with alpha functionality to MediaLive? ThanksFollowComment"
HEVC + alpha channel input and output on MediaLive
https://repost.aws/questions/QUnLNwXHWwTFObZv8lgZf9EQ/hevc-alpha-channel-input-and-output-on-medialive
false
"0It seems that you are using the MediaLive Input failover to choose between two different file types. Typically when using input failover, inputs are the same format.Also the failed file name is segment.m4s. Did you point MediaLive's failover input to a .smil file? It may not know how to use a .m4s. Perhaps you could change it to a .mp4 extension.CommentShareMike-MEanswered 10 months ago0To clarify I am trying to create a MediaLive channel with only one input (not HLS and MP4 file simultaneously). My problem is about HEVC alpha channel so please ignore the situation with HLS and let's focus only on MP4 file input.The problem is simple, if I use a MP4 HEVC (without alpha) file as input there is no problem. Now if I use a MP4 HEVC + alpha channel file, the channel fails to start:Input failed over to input [Transparent Video] id[1]CommentSharechillyjeeanswered 10 months ago"
"I'm getting an error on all instances in elasticbeanstalk. I tried to reset all instances. Terminate all of them and deleted all of them in EC2 dashboard console. I have created application in elasticbean stalk once more, and deploy once more just a single file to check if the error still appear and it is still there. What should i do ?FollowComment"
100.0 % of the requests are failing with HTTP 5xx.
https://repost.aws/questions/QUd0Ee0qDrR4y6tAgkCujJ1w/100-0-of-the-requests-are-failing-with-http-5xx
false
"0Hello,Are you using a load balancer ? Do you have check health url ?Can you check these 2 options :Verify health check url of load balancerVerify security group is allowed by ec2 security groupIf need more help, describe the architecture you haveCommentShareIsmahelanswered 5 months ago"
"Amazon RDS is starting the end of life (EOL) process for MariaDB major engine version 10.2. We are doing this because the MariaDB community is planning to discontinue support for MariaDB 10.2 on May 23, 2022 [1].Amazon RDS for MariaDB 10.2 will reach end of life on October 15, 2022 00:00:01 AM UTC. While you will be able to run your Amazon RDS for MariaDB 10.2 databases between community MariaDB 10.2 EOL (May 23, 2022) and Amazon RDS for MariaDB 10.2 EOL (October 15, 2022), these databases will not receive any security patches during this extended availability period. We strongly recommend that you proactively upgrade your databases to major version 10.3 or greater before community EOL on May 23, 2022. MariaDB 10.3 offers improved Oracle compatibility, support for querying historical states of the database, features that increase flexibility for developers and DBAs, and improved manageability [2]. Our most recent release, Amazon RDS for MariaDB 10.6, introduces multiple MariaDB features to enhance the performance, scalability, reliability and manageability of your workloads, including MyRocks storage engine, IAM integration, one-step multi-major upgrade, delayed replication, improved Oracle PL/SQL compatibility and Atomic DDL [3]. If you choose to upgrade to MariaDB 10.6, you will be able to upgrade your MariaDB 10.2 instances seamlessly to Amazon RDS for MariaDB 10.6 in a single step, thus reducing downtime substantially. Both versions, MariaDB 10.3 and 10.6, contain numerous fixes to various software bugs in earlier versions of the database.If you do not upgrade your databases before October 15, 2022, Amazon RDS will upgrade your MariaDB 10.2 databases to 10.3 during a scheduled maintenance window between October 15, 2022 00:00:01 UTC and November 15, 2022 00:00:01 UTC. On January 15, 2023 00:00:01 AM UTC, any Amazon RDS for MariaDB 10.2 databases that remain will be upgraded to version 10.3 regardless of whether the instances are in a maintenance window or not.You can initiate an upgrade of your database instance to a newer major version of MariaDB — either immediately or during your next maintenance window — using the AWS Management Console or the AWS Command Line Interface (CLI). The upgrade process will shut down the database instance, perform the upgrade, and restart the database instance. The database instance may be restarted multiple times during the upgrade process. While major version upgrades typically complete within the standard maintenance window, the duration of the upgrade depends on the number of objects within the database. To avoid any unplanned unavailability outside your maintenance window, we recommend that you first take a snapshot of your database and test the upgrade to get an estimate of the upgrade duration. If you are operating an Amazon RDS for MariaDB 10.2 database on one of the retired instance types (t1, m1, m2), you will need to migrate to a newer instance type before upgrading the engine major version. To learn more about upgrading MariaDB major versions in Amazon RDS, review the Upgrading Database Versions page [4].We want to make you aware of the following additional milestones associated with upgrading databases that are reaching EOL.**Now through October 15, 2022 00:00:01 AM UTC **- You can initiate upgrades of Amazon RDS for MariaDB 10.2 instances to MariaDB 10.3 or 10.6 at any time.July 15, 2022 00:00:01 AM UTC – After this date and time, you cannot create new Amazon RDS instances with MariaDB 10.2 from either the AWS Console or the CLI. 
You can continue to restore your MariaDB 10.2 snapshots as well as create read replicas with version 10.2 until the October 15, 2022 end of support date.October 15, 2022 00:00:01 AM UTC - Amazon RDS will automatically upgrade MariaDB 10.2 instances to version 10.3 within the earliest scheduled maintenance window that follows. After this date and time, any restoration of Amazon RDS for MariaDB 10.2 database snapshots will result in an automatic upgrade of the restored database to a still supported version at the time.January 15, 2023 00:00:01 AM UTC - Amazon RDS will automatically upgrade any remaining MariaDB 10.2 instances to version 10.3 whether or not they are in a maintenance window.If you have any questions or concerns, the AWS Support Team is available on AWS re:Post and via Premium Support [5].[1] https://mariadb.org/about/#maintenance-policy[2] https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-rds-now-supports-mariadb-10_3/[3] https://aws.amazon.com/about-aws/whats-new/2022/02/amazon-rds-mariadb-supports-mariadb-10-6/[4] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Upgrading.html[5] http://aws.amazon.com/supportFollowComment"
"Announcement: Amazon Relational Database Service (Amazon RDS) for MariaDB 10.2 End-of-Life date is October 15, 2022"
https://repost.aws/questions/QUPGswEbHrT0m4tNgAVNmssw/announcement-amazon-relational-database-service-amazon-rds-for-mariadb-10-2-end-of-life-date-is-october-15-2022
false
0[Announcement] Does not require an answer.CommentShareEXPERTIsraa-Nanswered 23 days ago
"Hello Team,Yesterday, one event occurred in my account that one of the EC2 instance went into a stopped state. So, I checked the CloudTrail log and I found nothing and also checked the system log and found nothing. The status Check of that instance is also green.Can anyone help me to find RCA?Thanks in AdvanceFollowComment"
Ec2 Instance in stopped state
https://repost.aws/questions/QUkv9TixocR0mXHOAY9aTf6g/ec2-instance-in-stopped-state
true
1Accepted AnswerDefinitely worth creating a support case for this to find out the root cause.CommentShareEXPERTBrettski-AWSanswered a year agoUttam Chauhan a year agoCan you please create a support case for this behalf of me?let me know if you required anything from my side.ThanksShare
"Please can someone help? I created a business AWS account earlier this week for my team to set up an SFTP. For some unknown reason, a few days later i am unable to sign into the account. When i try to sign in as a Root User, i receive the message 'Signing in with the root user is disabled for your account. You need to re-enable this feature.' When i try to recover my password, i receive the message 'Password recovery is disabled for your AWS account root user. You need to re-enable this feature.'. I can't seem to access the Support Center, as whenever i try, i am taken back to the login page. When i try to sign in as an IAM User, i receive the message 'You authentication information is incorrect'. When i click on 'Forgot Password', i receive the message 'Account owners, return to the main sign-in page and sign in using your email address. IAM users, only your administrator can reset your password.' I am literally going round in circles and can't seem to find a way into my account. Can anyone help? TIAFollowComment"
Log in Issues - AWS Management Console
https://repost.aws/questions/QUgZwyIHVqRjGdVXZA6ec8ow/log-in-issues-aws-management-console
false
"0Hi TherePlease submit a support request using this formhttps://support.aws.amazon.com/#/contacts/aws-account-supportCommentShareEXPERTMatt-Banswered 4 months ago0I would guess you are signing into a member account of an AWS Organization, and someone has enabled Service Control Policies in the Organization root AWS account to block root-user login for all member accounts.Try logging into the Organization root account to look at the Organization Service Control Policies.CommentShareGreg_Hanswered 4 months ago"
"I'm searching for a good way to automate migrating a DAG between multiple instances (staging/production) as part of a DevOps workflow. I would like to be able to run my DAGs in my staging environment with different configuration parameters (S3 bucket paths, etc.) and run the same DAG in my production environment without requiring a change to the DAG code (automate the migration).Here is what I'm considering:Set an environment variable in Airflow/MWAA instance as part of initial setup (e.g. env=staging, env=prod)Create json configuration file with staging and production configuration parameters and store it with the DAGsCreate a DAG that is a prerequisite for any DAGs which require configuration that checks Airflow environment variable and sets variables to staging/prod configuration parametersUse templated variables in DAGs requiring configurationIs there a better way to approach this? Any advice is appreciated!FollowComment"
"What's the best way to migrate DAGs between staging, prod environments?"
https://repost.aws/questions/QUHARB759uT-ay84ZyubHuGA/what-s-the-best-way-to-migrate-dags-between-staging-prod-environments
false
"0Hi, We've achieved this using the SSM Parameter store. created config in the parameter store and use plugins.py file to pull the configuration from SSM and set up environment variables.https://docs.aws.amazon.com/mwaa/latest/userguide/samples-env-variables.htmlCommentShareViresh Patelanswered 3 months agoPresgore 3 months agoThanks, your answer was helpful. Do the plugins, and subsequently the environment variables, get re-loaded only when the environment is built/re-built?Share"
"I am using the QuickStart Guide for a Lightsail cPanel & WHM server instance. All WHM settings are updated, including WHM password as per QuickStart guide. But when I attempt to log into cPanel (:2083) I am unsuccessful using the WHM login username and updated password. Attempts to request a cPanel password reset fail because the accepted emails setup on WHM are not accepted for a cPanel password reset. Any ideas ?FollowComment"
cPanel username and password - same as WHM ?
https://repost.aws/questions/QUM6WhE8cORqKI2XVZL9BuWA/cpanel-username-and-password-same-as-whm
false
"0Hi, @mcdsp.cPanel username and password - same as WHM?wrong.I introduced how to use WHM and cPanel below.Notice how I created a separate cPanel account along the way.You should have been issued a unique username and password for your cPanel account.https://dev.classmethod.jp/articles/amazon-lightsail-cpanel-whm-blueprint/If you can sign in to WHM, try editing your cPanel account.CommentShareEXPERTiwasaanswered 9 months agomcdsp 9 months agoHowdy iwasa,I did the steps you outlined and am at the point where I can publish my website. I found it annoying that the QuickStart guide provided in the cPanel & WHM based Lightsail server instance did not specify the need to also create a cPanel account, nor did it cover the need to update the static IP address of the WHM license (trial or regular) to make it work.Thanks for replying to my post.Shareiwasa EXPERT9 months agoyes. I too got confused the first time I used WHM & cPanel. I am happy to solve it.Please mark as Accept when the problem is solved.Share"
"I'm afraid I already suspect the CalDav answer (which kills WorkMail as a viable contender), but cannot find the specific admission anywhere:If we create a WorkMail calendar, is there a way for users with iPhones, Andriods, and other calendar software (like Thunderbird) to all work on it - using the same application in which they already manage their calendars?IOW, what would be the WorkMail endpoint which supports a iCalendar, ICS, CalDav or WCAP protocol?FYI - We do not use Microsoft's Exchange or client software.Thanks in advance,ALFollowComment"
Sharing our WorkMail calendar among desktop and mobile apps
https://repost.aws/questions/QUorYBNjF3SmSNQvSzQtFuBQ/sharing-our-workmail-calendar-among-desktop-and-mobile-apps
false
"1Hello everyone,Thank you for your feedback on WorkMail. I'm sorry to hear you're disappointed in the missing caldav functionality in WorkMail. WorkMail offers Exchange ActiveSync, EWS, and Exchange RPC protocols for the synchronization of calendar data to desktop and mobile clients.We know this doesn’t enable calendar synchronization for all email/calendar clients. We constantly evaluate our customers’ top needs so I will forward your feedback to the team.Kind regards,RobinCommentShareMODERATORrobinkawsanswered 3 years ago0Hi Al,I'm sorry to inform you that WorkMail does not offer the end-point you're looking for. I will forward this as a feature request to the service team.Kind regards,RobinCommentShareMODERATORrobinkawsanswered 3 years ago0WorkMail does not support any open calendar collaboration protocol. Thank you. Hopefully this post may help future developers learn this point faster.CommentShareA. K. Holdenanswered 3 years ago0Hi,that is really a big gap.For that low service level and that high price for a Workmail user (=Email) per month, I cannot recommend WorkMail and I am a AWS consultant. That would be ridiculous.I am using Workmail only for marketing purposes. But two user are totally enough for that purpose.RegardsJörnCommentShareyotronanswered 3 years ago0I am bit late to this, but also would like to ditto the original poster.This is a big gap and for the money we pay for workmail, I actually would expect to be able to sync my emails and calendars to any device I want. Supporting open calendar protocols should be top of the list for feature requests.Rant over, thank you.Edited by: LiXiaoPai on Jan 31, 2020 9:58 PMCommentShareLiXiaoPaianswered 3 years ago0Is there any news about this gap? We are analysing this need at this moment and we may abandon WorkMail because of this limitation.CommentShareneianswered 3 years ago01 vote for this feature!CommentSharekakokvantalianianswered 2 years ago0+1 for this. We use WorkMail in our organisation and also need calendar synchronisation across devices using native apps.PLEASE can we get this integrated?I really don't want to have to uproot ~30 users to a new system for emails/calendars when really for the price we pay, we should have this functionality!From what I have learnt from WorkMail it is based on Exchange 2013, is this going to be updated? I cannot use services such as Calendly either.TIACommentShareitswatsonanswered 2 years ago0+2CommentShareRJGanswered 2 years ago0To setup Email/Calendar synchronization in iOS and Android, follow: https://docs.aws.amazon.com/workmail/latest/userguide/mobile-client.htmlCommentShareDimaAWSanswered a year ago0Three years and still in the same situation. Is there any possibility to use "webcal" url in WorkMail web interface? It's really a nice feature... I need to import a calendar from an external Exchange Server and keep it constantly aligned.Another feature is the possibility to massively import contacts from the webmail... At least with CSV...Those two features may really improve the service: we cannot think to let user programming anything with the SDK.Any solution?CommentSharecarlo_graanswered 9 months ago0I agree with the above observations. In a modern agile work environment, this lack of functionality is business critical. At this price point, it's baffling... please develop. Unfortunately I'll have to take my business elsewhere if not...and that affects wider solution decisions...CommentSharerePost-User-7171354answered 7 months ago"
I ask because we're paying for Multi-AZ but a recent RDS automatic update is taking all our sites down for 5 minutes each. Maybe this is a rare update that defies Multi-AZ? But is there a log to show when the instance has failed over, or can I set up a CloudWatch alert for it? Is there anything I should be checking to make sure I'm getting my money's worth?Thank you.FollowComment
How can I see if RDS Multi-AZ is working?
https://repost.aws/questions/QUZ_yKz-g3Tdiq19jTQ6Y4Ag/how-can-i-see-if-rds-multi-az-is-working
false
"0Hello,Take a look at this Knowledge center articleCommentShareEXPERTTushar_Janswered 8 months ago0It depends on the type of RDS update. If it is DB Engine update, even if your RDS DB instance uses a Multi-AZ deployment, both the primary and standby DB instances are upgraded at the same time. This causes downtime until the upgrade is complete, and the duration of the downtime varies based on the size of your DB instance.Please refer below url for better understanding of RDS downtime during maintanance https://aws.amazon.com/premiumsupport/knowledge-center/rds-required-maintenance/DB failover/updates/restarts are recorded under the Events page of you RDS console. You can navigate the Events page and check the details. You can even set the notifications for these events using Amazon SNS. Please refer the below url for SNS setuphttps://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.htmlCommentShareJosephanswered 8 months ago0HelloYou can see it through Amazon RDS eventsTo identify the root cause of an unplanned outage in your instance, view all the Amazon RDS events for the last 24 hours. All the events are registered in the UTC/GMT time by default. To store events a longer time, send the Amazon RDS events to Amazon CloudWatch Events. For more information, see Creating a rule that triggers on an Amazon RDS event. When your instance restarts, you see one of the following messages in RDS event notifications:The RDS instance was modified by customer: This RDS event message indicates that the failover was initiated by an RDS instance modification.Applying modification to database instance class: This RDS event message indicates that the DB instance class type is changed. - Single-AZ deployments become unavailable for a few minutes during this scaling operation. - Multi-AZ deployments are unavailable during the time that it takes for the instance to failover. This duration is usually about 60 seconds. This is because the standby database is upgraded before the newly sized database experiences a failover. Then, your database is restarted, and the engine performs recovery to make sure that your database remains in a consistent state.The user requested a failover of the DB instance: This message indicates that you initiated a manual reboot of the DB instance using the option Reboot or Reboot with failover. The primary host of the RDS Multi-AZ instance is unhealthy: This reason indicates a transient underlying hardware issue that led to the loss of communication to the primary instance. This issue might have rendered the instance unhealthy because the RDS monitoring system couldn't communicate with the RDS instance for performing the health checks.The primary host of the RDS Multi-AZ instance is unreachable due to loss of network connectivity: This reason indicates that the Multi-AZ failover and database instance restart were caused by a transient network issue that affected the primary host of your Multi-AZ deployment. The internal monitoring system detected this issue and initiated a failover.The RDS Multi-AZ primary instance is busy and unresponsive, the Multi-AZ instance activation started, or the Multi-AZ instance activation completed: The event log shows these messages under the following situations: - The primary DB instance is unresponsive. - A memory crunch after an excessive memory consumption in the database prevented the RDS monitoring system from contacting the underlying host. Hence the database restarts by our monitoring system as a proactive measure. 
- The DB instance experienced intermittent network issues with the underlying host. - The instance experienced a database load. In this case, you might notice spikes in CloudWatch metrics CPUUtilization, DatabaseConnections, IOPS metrics, and Throughput details. You might also notice depletion of Freeablememory.Database instance patched: This message indicates that the DB instance underwent a minor version upgrade during a maintenance window because the setting Auto minor version upgrade is enabled on the instance.Reference - https://aws.amazon.com/premiumsupport/knowledge-center/rds-multi-az-failover-restart/CommentShareVetrivelanswered 8 months ago"
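In addition to reviewing the Events page in the console, the same event history can be pulled via the API. A minimal sketch that lists the last 24 hours of events for one instance so you can look for failover-related messages; the instance identifier is a placeholder:

```python
import boto3

# Sketch: pull the last 24 hours of RDS events for a single instance to
# look for failover-related messages. The instance ID is a placeholder.
rds = boto3.client("rds")

resp = rds.describe_events(
    SourceIdentifier="mydbinstance",   # assumption: your DB instance identifier
    SourceType="db-instance",
    Duration=1440,                     # minutes, i.e. the last 24 hours
)

for event in resp["Events"]:
    print(event["Date"], event["Message"])
```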
"I am trying to restore RDS Postgres from a snapshot and then increase its iops. I tried on Friday to restore it and increase the iops, and I let it run for 18 hours, and it never got to a finished state or even into optimizing storage state. I assumed something had gone wrong and I deleted it, and I tried again today. The same mostly happened today, except that this time I got an event that said 'The storage volume underlying the primary host of the RDS Multi-AZ instance experienced a failure.'FollowComment"
RDS Postgres Increasing IOPS Never Finishes
https://repost.aws/questions/QUh3LIvpuhR4GXeiflPu43Qg/rds-postgres-increasing-iops-never-finishes
false
"Please help with suggestions of what I can do to get my site back online.Encountered an Error 521 -- which in the past has meant just a quick stop and start of the server to get back online. Not today.I've stopped and restarted my Lightsail instance multiple times with no effect. I've rebooted as well, with no effect. I had noticed the ** Restart Required ** that last time I connected via SSH, just before stopping the server. Could the restart be required for some type of update, which has locked me up?UPDATE: I am also unable to connect to the phpMyAdmin though my SSH connection does "work". The normal method (I use this a lot) to connect is to SSH in, hit the http://localhost:8888/phpmyadmin/index.php url and then edit. Says: "This site can’t be reached The connection was reset" -- stopped and restarted the SSH connection (successfully), but still no joy.Help, please. Anyone, any suggestions on what to try now?CenaySite: https://www.cenaynailor.comEdited by: Cenay on Jun 26, 2019 11:15 AM - removed incorrect formatting on linkEdited by: Cenay on Jun 26, 2019 11:32 AMAdded new information about the phpMyAdmin issue now presentEdited by: Cenay on Jun 26, 2019 11:40 AMFollowComment"
"Error 521 and site won't come back up, despite restarting. Suggestions?"
https://repost.aws/questions/QUqS2S5ULvSuuCvzQzUWmwuw/error-521-and-site-won-t-come-back-up-despite-restarting-suggestions
false
"1Solved my own issue... sharing it just in case anyone else encountered something like this.So, about a month ago, I had added a small change to the htaccess.conf file in /opt/bitnami/apps/wordpress/conf to prevent a type of exploit I was seeing. The code I added is shown below:# Added to prevent exploits<Directory "/opt/bitnami/apps/wordpress/htdocs"># Block WordPress xmlrpc.php requests<Files xmlrpc.php>order deny,allowdeny from allallow from 67.61.95.121</Files></Direcory>```As you can see from the content above, the closing </Directory> tag was misspelled, causing Apache to not start. The reason it took a month to manifest, is because I didn't restart Apache or the server after making this change (stupid, I know) So, my advice. If you make a change, force your site to stop and restart and check it immediately. Don't wait a month until a random reboot introduces the error, as you won't know where it came from. Just my .02 worth.CommentShareCenayanswered 4 years ago"
"So I have a PHP web app on Elastic Beanstalk, and I'm trying to use the PHP AWS SDK to work with files on S3. So far I haven't got it working.Per this documentation: https://docs.aws.amazon.com/aws-sdk-php/v2/guide/installation.html#using-composer-with-aws-elastic-beanstalkI have the following in a composer.json file at the root level of my application:{"require": {"aws/aws-sdk-php": "3.*"}}Do I need something else in it? Like an autoload section or something?Since the aws.phar file is over 16MB, I don't want to include it in my app and have to upload it every time I make an unrelated update; from what I understand, Elastic Beanstalk already has Composer installed, and the above json should install the AWS PHP SDK on deployment.I'm thinking I need a require('/vendor/autoload.php'); or something in my code? If so, how do I know where the autoload.php file is on Elastic Beanstalk? Or do I need to create one? (And if so, what do I put in it?) When I used that line, I just get an error saying it can't find the file.I don't have a vendor folder in my application.FollowComment"
Using AWS PHP SDK on Elastic Beanstalk
https://repost.aws/questions/QUkI6PHcNLSmuDXOeGjv-c1g/using-aws-php-sdk-on-elastic-beanstalk
false
"0For future reference, I figured this out as follows.I had been getting "No composer.json file detected" in my logs.I discovered that because of the way I was zipping my package on macOS (using the native right click > Compress), it puts the files in a sub-folder inside the zipped package. So the composer.json was not at the "root" level of my package. I added it to the "root" level, and then once deployed I got the following in the logs:Found composer.json file. Attempting to install vendors. (...)And then the path to "vendor/autoload.php" was not in the directory with my application either (NOT /var/app/current/application-files/web/vendor), but closer to the root level (e.g. /var/app/current/vendor)Hopefully this saves someone else some headaches!CommentShareBen-in-CAanswered 3 years ago"
"Actually we have only Event script with a small payload to be sent, that excludes for example containers like a map.But a consumer of an Event should know/be aware on how to retrieve relevant data from other sources, like with big containers.It seems that it misses a kind of component to do it (like a kind of database component) ?And we don't have the possibilty to do it directly through globals for example.I only see actually to make it with the need to communicate back to the event producer entity, so we’ll need to create new event producers in our consumer entity and new consumers in our firstly producer entity to handle this kind of database query.This could lead to a significant increase in the number of events and complexity.What could be the best and easyest way to do it ?Follow"
A consumer of an Event aware on how to retrieve relevant data ... a business model barrier?
https://repost.aws/questions/QUkloTVpnYTtyfUcjPl9VSgA/a-consumer-of-an-event-aware-on-how-to-retrieve-relevant-data-a-business-model-barrier
false
"0It's now reasonable to think that if there's no answer about several questions about global variables in this forum, it's because we're at the border with Amazon services, that is in occurence the AWS Data Exchange ... through the fundamental introduction to building games on AWS using Amazon Lumberyard (free course e-learning training), it describes well the architectural benefits of integrating Lumberyard with other AWS services and should perhaps be the first step to take before using Lumberyard. I'm sorry I didn't follow it from the beginning :roll_eyes:SharerePost-User-3300607answered 3 years ago0Hi @REDACTEDUSERIt depends what your goals are, DynamoDB is a good place for global data storage if you need the data cloud based, which is often the case in many games nowadays. A client side system for persistent storage of user settings can be found in the SaveData gem, this is useful for things such as Save Games.If you need transient data storage that only lasts throughout the game or simulation session you can use Script Canvas with Script Events. You create an entity that works as the storage and has a Script Canvas graph that receives notifications of value updates through Script Events, you use these to update your variables or request their values.Hopefully one of these methods is useful for your goals, for me the best situation would be a "Persistent Data" gem that provides all of these services in a convenient location, unfortunately we don't have something like that at the moment.SharerePost-User-2322203answered 3 years ago0Hi Luis @REDACTEDUSERSharerePost-User-3300607answered 3 years ago0@REDACTEDUSERIt seems to me on the contrary that when you re-enter a script with calling through an Event, it has lost all the variables values that was set by a the previously assignement of it's local variables... all Variable are cleared when coming back into the script. 
💥If it's really the case, unless there is a way to make these Script Variables persistent, we can't do what you propose as all local variables are lost when you try to read them.SharerePost-User-3300607answered 3 years ago0Hi @REDACTEDUSERI tried this locally just now and it appears to work correctly, I would have to know more how your scripts are being used, one thing you want to avoid is deactivating the entity that is holding your storage for example.For my test I used two scripts on two separate entities, one I called VariableStorage and one VariableQuery.VariableStorage:REMOVEDUPLOADVariableQuery:REMOVEDUPLOAD(Script Canvas) - STORED: 1.0000(Script Canvas) - QUERY RETURNED: 1.0000(Script Canvas) - STORED: 2.0000(Script Canvas) - QUERY RETURNED: 2.0000(Script Canvas) - STORED: 3.0000(Script Canvas) - QUERY RETURNED: 3.0000Edit: here's the Script Event setup I used:REMOVEDUPLOADSharerePost-User-2322203answered 3 years ago0I've seen @REDACTEDUSERSharerePost-User-2322203answered 3 years ago0@REDACTEDUSERBut in reality, when using global data, it's for relatively static data but bigger than a simple Variable number and we can't use even simple like Variable Arrays in the parameters inside the Node Event.As you asked more on the script I'm doing, one thing effectively I want is "to avoid deactivating the entity that is holding my storage ".In brief to tell you more about my scripts, I have a Parent-Children hierarchy of entities that call successively each other in cascade in order to require a treatment on a data at a below level.The event-driven approach is always the same with a raise of a request Event from the top Entity; then Entities Services that are listening to that Event raise a Confirmed Event when they have done their job after calling their chidren entities ... thus a cascade of Events that works perfectly well.As I wrote, when you come back with a new Event inside a script already activated during a precedent Event, it's local variables aren't persistent any more and those variables are numbers and Arrays of numbers (by the way, the "For Each" node could return the index value on Array ....)As when @REDACTEDUSERThus the scope of Variables and persistency is at the center of this discussion and should be covered by an event-driven design explainations with a kind of cook book/best practices in LY scripts that's currently missing from the LY doc.SharerePost-User-3300607answered 3 years ago0@REDACTEDUSERSharerePost-User-3300607answered 3 years ago0Mr @REDACTEDUSERSharerePost-User-8224505answered 3 years ago0Mr @REDACTEDUSERHundreds of artificial intelligence or 1,000 enemies artificial intelligence are attacking me. 
The grenade must examine/explode their distance and radius with the array of enemies positions(less 5 meter + foreach+Vector3D.Distance(Grenade.Position,arrary[index].Position)), if not we must think Sphere raycast or Cube raycast for GetListEnemies...I hope you understand what I mean, excuse me , I can not write english very well 😊SharerePost-User-8224505answered 3 years ago0For info, I'm already playing with vertex inside a mesh of an entity that serves as a big array of global/persitent data.It already exists a VertexContainer, that is a container of vertices of variable length, with all of what we need like get, set, clear, remove, ...Each vertex point of the mesh has a position that you can get/set through it's index and thus mesh can serve as a very big global Array.The starting point is here for updating the global Array with an index:/*** Update a vertex at a particular index.* @REDACTEDUSER* @REDACTEDUSER* @REDACTEDUSER*/virtual bool UpdateVertex(size_t index, const Vertex& vertex) = 0;or getting a value from this global Array/*** Get a vertex at a particular index.* @REDACTEDUSER* @REDACTEDUSER* @REDACTEDUSER*/virtual bool GetVertex(size_t index, Vertex& vertex) const = 0;It's in the VertexContainerInterface.h file.As an example, the VertexContainerInterface is also already implemented by the Spline component and Polygon Prism component EBuses.The simplest way is to reuse this example in Lua seen here https://docs.aws.amazon.com/lumberyard/latest/userguide/component-vertex-container.html ) ... thus reusing this EBus Request Bus Interface Example.This solution is uncompromisingly fast and easy to use and it's the raison why I proposed Luis @REDACTEDUSERSharerePost-User-3300607answered 3 years ago0Last solution but not least solution : we can introduce our glogal data inside tags, by using the tag component on Entities.Too those Entities could be generated through spawn in order to create things like Array or Map containers at the size that we need.For example, each Entity will bring with it several Tags that could simulate keys/data, and we have the existing node that allows us to get entities according to a tag like key in containers, and also delete, clear, ...REMOVEDUPLOADREMOVEDUPLOAD@REDACTEDUSERSharerePost-User-3300607answered 3 years ago0Mr @REDACTEDUSERSharerePost-User-8224505answered 3 years ago0Mr @REDACTEDUSERhttps://www.youtube.com/playlist?list=PLGiO9ZyED9TMH4fMy5rttmWQRq0EmJKjRSharerePost-User-8224505answered 3 years ago"
"Hi there, I'm used to work with an Autostop Workspace with no problems. The vm contains VS Code and other basic stuff.Yesterday I wanted to use it, but the client returns an error. The AWS console reports Starting status. It stays for quite some time (an hour or so), then it goes back to Stopped. Basically, I can't work anymore with this Workspace. I've tried to connect via RDP adding the firewall rule etc, but I got a connection refused error. I've tried via AWS CLI, but the result is the same. I have a basic subscription, so I can't open a support ticket.Any suggestion?Thank you.FollowComment"
Workspace stuck on Starting
https://repost.aws/questions/QUQrU8oeH3ROWR57mhRXYQ_w/workspace-stuck-on-starting
false
"0Thank you so much arun, your suggestions helped me: restore didn't work, putting the Workspace in Unhealthy state. Tried to connect, reboot, RDP, no luck. Finally, a rebuild solved my issue.I don't get why this happened, I use the Workspace few hours per week and without changing anything on its configuration. As a matter of fact, the service looks unstable by itself.Thank you again.CommentSharematroanswered 8 months agoArun_PC 8 months agoSorry to hear about your experience. This is more an exception in my experience. Usually what gets a Windows machine stuck are pending updates, misbehaving applications or the OS waiting for something to complete. Hopefully you could identify the root of the problem.Share0Matro,There is official guidance for such issues.https://aws.amazon.com/premiumsupport/knowledge-center/workspaces-stuck-statushttps://youtu.be/3m9tl0v579oIf the above steps do not work(as your RDP is inaccessible), I would usually ask to open a Support Case but in your case, that is not applicable.So, if your WorkSpace can afford lose data up to 12 hours, i will use try Restore and if that doesnt work, I will try a Rebuild.https://docs.aws.amazon.com/workspaces/latest/adminguide/restore-workspace.htmlhttps://docs.aws.amazon.com/workspaces/latest/adminguide/rebuild-workspace.htmlHope it helps,-arun.CommentShareArun_PCanswered 8 months ago0Same thing happen with me from last two days when i tried to connect my AWS Workspace it show in starting state for 1 hour or so and end with unhealthy state after that i reboot and i was able to connect it. But not yet got any permanent solution for this issue. If anyone have fix for this please share it with us....thanks in advance.CommentShareHarpreet-User-1830214answered 2 months ago"
"I am working on VA form 22-0803, reimbursement of licensing or certification test fees, and it requires a mailing address for the issuing organization. A few searches of various keywords and phrases hasn't gotten me anywhere. Does anyone have experience with this form or just happen to know the address AWS recommends for this? Thanks!FollowComment"
Does anyone know the AWS mailing address that can be used for VA reimbursement forms?
https://repost.aws/questions/QUnVDr5Vg5SYyz1p5xwoBbBg/does-anyone-know-the-aws-mailing-address-that-can-be-used-for-va-reimbursement-forms
true
"1Accepted AnswerYou can use the below address for the form:Amazon Web Services, Inc.410 Terry Avenue NSeattle, WA 98109KimAWS CertificationCommentShareKimWD-AWSanswered a year agogrungydan a year agoAwesome, thank you!Share"
"I am now using AWS Simple Email Service, and I have already verity the two of my own domains.However, on January 24, I switched from a .net domain to an .im domain, and then I had the problem described below.I also set up my own post office through Mailu, using the .im domain I purchased a few days ago. However, I find that I cannot receive emails from AWS Simple Email Service, and it seems to be a hard bounce according to the AWS SES Dashboard. But I have added the relevant DNS records according to Mailu's prompts, and I am receiving notification emails from AWS and other service providers, I just can't receive them from the AWS SES automated program, which seems odd.This problem only occurs when sending from AWS SES with an incoming email suffix of the .im domain I purchased a few days ago, emails sent to others via AWS SES seem to be working fine.I also sent a Ticket yesterday asking if I had received a restriction on this service for my account. However, AWS replied that I was not restricted and they recommended that I escalate the work order to a technical work order or ask the community for help.Maybe I should update my DNS record? How can I solved this problem, and thanks for Community's help.FollowComment"
Can't receive Email send from AWS Simple Email Service
https://repost.aws/questions/QUKPPd5BZPQKmvhU8aQEcdMA/can-t-receive-email-send-from-aws-simple-email-service
false
"Hello, this morning when I connected trough RDP I noticed long laggy connection. I have restarted instance and had problem with getting in for an hour+. Finally I got inside, but it is very slow basically in every task, like opening folder. CPU is at just few % of utilization, and lots of available ram. I found other topics saying that it may be due to lack of CPU credits, but in monitoring page I see usage is at 0.859, and balance at 655.This is t2.2xlarge instance with Windows Server 2019.FollowComment"
Windows Instance slow since today
https://repost.aws/questions/QUNDpyXfBcRT6y-2_RWqXx2w/windows-instance-slow-since-today
false
"0Hi there!thank you for posting your concern here.In this case there are many possible causes of this problem, can be a problem with an external service that your instance relies on, also disk thrashing and can be network connectivity issues, I suggest that maybe you check the EBS volume metrics since Amazon Elastic Block Store (Amazon EBS) sends data points to CloudWatch for several metrics. Amazon EBS General Purpose SSD (gp2), Throughput Optimized HDD (st1) , Cold HDD (sc1), and Magnetic (standard) volumes automatically send five-minute metrics to CloudWatch.You can use this link also to do further trouble shooting https://aws.amazon.com/premiumsupport/knowledge-center/ec2-instance-slow-cpu-not-high/I hope this will help.Thank youTLCommentShareThabo-Lwazi-Mziyakoanswered 3 years ago"
"component deploy to a thing group-->boot a unused pi, and let it to be a core device in the group-->deploy the pi with all the component that had been deployed in the grouphow can i do to accomplish the last step?FollowComment"
[greengrass]deploy the component from thing group to thing
https://repost.aws/questions/QU3849-hGyRb6iZumMsZoiEw/greengrass-deploy-the-component-from-thing-group-to-thing
false
"1Hi hy_galen.https://docs.aws.amazon.com/greengrass/v2/developerguide/manage-deployments.htmlDeployments are continuous. When you create a deployment, AWS IoT Greengrass rolls out the deployment to target devices that are online. If a target device isn't online, then it receives the deployment the next time it connects to AWS IoT Greengrass. When you add a core device to a target thing group, AWS IoT Greengrass sends the device the latest deployment for that thing group.Are you observing different behaviour?CommentShareEXPERTGreg_Banswered 9 months ago"
"For applications that are not built to use read replicas and send writes to a master connection, is there an option to have the read replicas pass on all write queries while handling reads themselves?The multi-master mode might work for this but it sounds risky if the code doesn't divide writes between the nodes to avoid conflicts.FollowComment"
read replicas with legacy code that doesn't properly separate queries
https://repost.aws/questions/QUQ4vQJ5bRSK2lbYN0Q62-Tg/read-replicas-with-legacy-code-that-doesn-t-properly-separate-queries
false
"0I was working on read-write separation (Aurora 4 MySQL) for quite some time and as far as I can tell read replicas do not have such an option.If the implementation of query routing is not an option, then you may consider using third-party RW splitters, like ProxySQL, Apache Sharding Sphere (https://shardingsphere.apache.org/document/current/en/features/read-write-split/) orHeimdall proxy (https://aws.amazon.com/blogs/apn/using-the-heimdall-proxy-to-split-reads-and-writes-for-amazon-aurora-and-amazon-rds/).HTHCommentShareepNIckanswered 3 years ago0Excellent options, thanks!CommentSharerichardgvancouveranswered 3 years ago"
"Onboarded native Delta table usingCREATE EXTERNAL TABLE [table_name]LOCATION '[s3_location]'TBLPROPERTIES ('table_type'='DELTA');Works great when I query it. However, when I rundrop table [table_name]I get the following error:"Routed statement type 'DROP_TABLE' to DeltaLakeDDLEngine, expected to route to DATACATALOG_DDL_ENGINE"FollowComment"
How do I drop native Delta tables from Athena catalog?
https://repost.aws/questions/QUjDk1linASzmLfUU6NYad3Q/how-do-i-drop-native-delta-tables-from-athena-catalog
false
"1Deleting from the Glue UI or using Glue API worksCommentSharerePost-User-5062491answered 5 months agoEXPERTFabrizio@AWSreviewed 5 months agoFabrizio@AWS EXPERT5 months agosee the documentation to review the DDL support: https://docs.aws.amazon.com/athena/latest/ug/delta-lake-tables.htmlSharebig-data-user-1234 a month ago@fabrizio this answer is okay as a manual workaround, but I have workloads that rely on the athena API only. There is no feasible way for me to change the underlying code so your workaround is no good for my use case.Is there any way to raise this issue with the AWS team? The specific ask is "support dropping delta tables from athena via SQL command". ThanksShare"
"When trying to access my website I get the error: "ERR_CONNECTION_TIMED_OUT".When trying to investigate and connect to my EC2 instance, I get the following errors:A connection to the instance could not be establishedThis instance type is not supported by the EC2 Serial Console.As additional information, the instance and the site were only working a few days ago, I have not made any changes neither on the instance nor on any configuration file.FollowComment"
A connection to the EC2 t2.micro instance could not be established
https://repost.aws/questions/QUsxBfYkiVROCI5SBnEcHGPA/a-connection-to-the-ecc2-t2-micro-instance-could-not-be-established
true
"1Accepted AnswerHello there!The connection timed out error could be due to various factors such as:Your security groups and network access control lists (NACLs) blocking traffic to and from your website.The operating system has a firewall.There is a firewall between the client and the server.The host does not exist.The following AWS video helps to solve the above mentioned factors:https://youtu.be/TAHafjKM3FUPlease note that the above is a third-party video, therefore you must test the recommendations in a test environment before applying them on production.AWS ec2 system level logs provides more details on operating system level problems such as use of proxy or system being overloaded till it fails to process the incoming requests on time (see reference 1).To resolve this, you can configure the inbound traffic for your website to allow traffic from your ip address or your selected network of addresses in your security groups and NACLs (see reference 2).The reason for the second issue is that the instance type you are trying to use is not supported for the EC2 serial console.For supported instance types, please refer to the documentation on the provided link (see reference 3). To connect to your instance using the EC2 serial console, the instance type must be built on the AWS Nitro System, excluding bare metal instances.For information about Instances built on the Nitro System, please see reference 4.You can change the instance type by following the documentation (see reference 5). References:[1] https://docs.aws.amazon.com/managedservices/latest/userguide/access-to-logs-ec2.html[2] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.html#TroubleshootingInstancesConnectionTimeout[3] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-to-serial-console.html#sc-prerequisites[4] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html?icmpid=docs_ec2_console#ec2-nitro-instances[5] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-resize.htmlCommentShareAsenathianswered a year ago"
"Hi there,I am planning to create a Transit gateway and there would be 5 VPCs which I will be attaching to the Transit Gateway. What would be the pricing of the transit gateway in that case?FollowComment"
Transit Gateway Pricing for various environments that I have
https://repost.aws/questions/QUaAhq7NuBQEqIis3Uz573Zg/transit-gateway-pricing-for-various-environments-that-i-have
true
"4Accepted AnswerAWS Transit Gateway Charges you, here is the calculation for TGW Attachment for 5 VPCs as I assume that you would be having at least 5 attachments in that case.Price per AWS Transit Gateway attachment ($)$0.07Price per GB of data processed ($)$0.02730 hours in a month x 0.05 USD = 36.50 USD (Transit Gateway attachment hourly cost)1 GB per month x 0.02 USD = 0.02 USD (Transit Gateway data processing cost)36.50 USD + 0.02 USD = 36.52 USD (Transit Gateway processing and monthly cost per attachment)5 attachments x 36.52 USD = 182.60 USD (Total Transit Gateway per attachment usage and data processing cost)Total Transit Gateway per attachment usage and data processing cost (monthly): 182.60 USDAlong with that you will be paying for the Data Processing as well with 0.03 USD/GB. Please note that the pricing may vary based on the AWS Region that you will select.References: https://aws.amazon.com/transit-gateway/pricing/CommentShareGovind Kumaranswered a month agoEXPERTalatechreviewed a month ago"
"Why s3 batch operation copy from one bucket to another in the same region takes a long time?I try to copy 8000 object with average size of 900 kb (using csv manifest with source bucket and s3 key, bucket doesn't have versioning) and job was in active status 69 hours and didn't complete so I just canceled the job.FollowComment"
S3 batch operations takes too long
https://repost.aws/questions/QUFqLci2qFRGmI62a96adJOg/s3-batch-operations-takes-to-long
false
"1Hi ThereTake a look at this previous answerhttps://repost.aws/questions/QUzH_mXBObTuO9iq_pvJkkoA/s-3-batch-operations-job-stays-activeA batch job performs a specified operation on every object that is included in its manifest. A manifest lists the objects that you want a batch job to process and it is stored as an object in a bucket. After you create a job, Amazon S3 processes the list of objects in the manifest and runs the specified operation against each object.When a S3 batch job is "active" it means Amazon S3 is performing the requested operation on the objects listed in the manifest. While a job is Active, you can monitor its progress using the Amazon S3 console or the DescribeJob operation through the REST API, AWS CLI, or AWS SDKs.The performance/speed of any particular Batch Operations job will depend on a variety of factors, including :The number of objects in the manifestThe type of operation (Glacier restore, copy, Lambda function, etc)The number of active jobs your accountThe number of enqueued jobs for your accountOther traffic to the source and/or destination bucketThe size of the objectsYou can refer to the following link for information on troubleshooting S3 batch jobs: https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-batch-operations/If you need additional support, then you can also open a case with AWS Premium Support to review the details of this particular job.CommentShareEXPERTMatt-Banswered 8 months agoAgan 8 months agoThank you for your response.So it's terns out that is's pretty useless tool? Cause I just test on 132 objects and it already 17 hours in active statusShare"
"I have created a IAM user in Account A and want to access a bucket in Account B.I have added the bucket policy and cross account policy to the IAM user.I have created the client ID and secret key for the IAM user and tried accessing the s3 using the pre-defined actions in SNOW.Unable to list the buckets in Account B, i am able to list the buckets only in Account A as the IAM user is created in Account AFollowComment"
cross account access using servicenow - amazon s3 spoke
https://repost.aws/questions/QUm1TuQQ_KQzaW2sThlNAUHg/cross-account-access-using-servicenow-amazon-s3-spoke
false
"0IAM user in Account A contains an ARN that must be in the target bucket policy. IAM Users have no trust policy required for trust, IAM User in Account A must also have the proper S3 IAM Privileges to action upon the specific resource being plotted in Account B.Using the AWS CLI make sure you configure it to point to the profile in Account A. IAM User Account A, performing aws s3api list-objects --bucket $target_bucket_name (bucket within account B)Then you can validate that your access is correct so that IAM User in Account A has access to Account B bucket.CommentShareD Ganswered 2 months ago0There are two ways to do it.A) You can use the S3 bucket policy if you don't wish to give up your current role and assume another role. You can refer to this linkhttps://docs.aws.amazon.com/AmazonS3/latest/userguide/example-walkthroughs-managing-access-example2.htmlB) If assuming another role is not an issue for you then you can utilize a cross-account role. Please refer to this linkhttps://docs.aws.amazon.com/AmazonS3/latest/userguide/example-walkthroughs-managing-access-example4.htmlCommentShareGauravanswered 2 months ago"
Do we have an issue with the Redshift resume function today? I resumed a 1-node dc2.large for testing and it's been almost an hour in the Modifying (resume) state. Normally it resumes after 5 to 10 minutes.ThanksFollowComment
redshift resume issue today
https://repost.aws/questions/QUii2d_MEWQf6uMQ2mig6Z3Q/redshift-resume-issue-today
false
"0I assume that you checked the cluster status on the Redshift console. In that case, it's possible that your cluster was already available but the Redshift console didn't update it until an hour later.My workaround: use AWS CLI to check the cluster status every couple of minutes. For example, run this command:aws redshift describe-clusters --cluster-identifier 'cluster_identifier/name' --query 'Clusters[].{ClusterIdentifier:ClusterIdentifier,ClusterStatus:ClusterStatus,ClusterAvailabilityStatus:ClusterAvailabilityStatus}' --region XXX --profile YYYand you'll see something like[ { "ClusterIdentifier": "cluster-name", "ClusterStatus": "available", "ClusterAvailabilityStatus": "Available" }]This cluster status result from AWS CLI is apparently more reliable than the Redshift console.If the status shows "resuming" for a long time: it means the resume process indeed takes longer and you should reach out to the AWS Premium Support.If the status shows "available" shortly in 5~10min: your cluster is available now, try connecting to it with SQL client apps. When Redshift console shows the cluster is resuming, you can't use the Redshift Query Editor.Otherwise if the below workaround doesn't work for you, or you are seeking for a permanent fix, open a case to the AWS Premium Support, Redshift team.CommentShareAWS-supipianswered a year ago"
"Hello,I wanted to know how can I integrate AWS S3 to Azure Cognitive Services, specifically a private S3 URL, can I use it to get the data from S3 that can be used as an input to one of the Azure services? Please let me know. Thank you.FollowComment"
How to integrate AWS S3 and Azure Cognitive Services (OCR)?
https://repost.aws/questions/QUsJ08JDmRRVuogo8NIKQz6A/how-to-integrate-aws-s3-and-azure-cognitive-services-ocr
false
"0Hi.I understand that you want to use private Amazon S3 directly as input to Azure Cognitive Services.You cannot specify a private Amazon S3 URL directly to the Vision API.It must be a public URL that the Vision runtime in Azure can access.Instead you can use the same approach for sending local images.https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/how-to/call-read-api#submit-data-to-the-serviceIn the above, image data is specified in binary in the request body when sending local data.Your application should use IAM to retrieve data from a private S3 bucket and send the data as binary before calling the Vision API.CommentShareEXPERTiwasaanswered 4 months ago"
"Hi,I've added a new secrets for my Aurora RDS. In the secret, I see "host" entry which points to the writer node. How do I get the host information for the read-replica node? I could add it manually, but I don't want to :)Thanks!!FollowComment"
How to get the read-replica RDS host info (multi-az scenario)?
https://repost.aws/questions/QUDFTJaGHCR7a_QL8rekOq8g/how-to-get-the-read-replica-rds-host-info-multi-az-scenario
true
"0Accepted AnswerSecrets Manager populates the host field with the writer node/master because if rotation is turned on, it needs to connect to the master to update the password.The RDS DescribeDBClusterEndpoints call can be used to find the other endpoints.https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusterEndpoints.htmlCommentShareAWS-User-9028130answered 3 years ago"
"hi there,I have an issue that I've been searching high and low for days to find an answer.I have a wordpress website which is closed to the public, requires paid membership.I have videos playing through a wordpress plugin called Ultimate video player thatI've tried to send requests to play the videos stores in my S3 bucket.It works IF the bucket is set to fully public. These videos are private and should not be viewable by the public.It does not work IF I set the bucket to private (the Access reads Only authorized users of this account)and place a bucket policy in permissions as follows:{"Version": "2012-10-17","Id": "http referer policy example","Statement": [{"Sid": "Allow get requests originating from mywebsite.ca.","Effect": "Allow","Principal": "","Action": ["s3:GetObject","s3:GetObjectVersion"],"Resource": "arn:aws:s3:::my-bucket-name-here/","Condition": {"StringLike": {"aws:Referer": "https://mywebsite.ca/*"}}}]}I have not done anything at the server where my website is hosted. The tech guys there said theirreferers should be OK for this request.I consulted with the developer of the video player and he said the issues lies with AWS S3 system.So here I am hoping to get some help about how to make this work.I am using a wordpress app (S3 smart upload) that has successfully connected to the S3 bucket and displays all the files and folders accurately.I am able to add each of the videos to the media library so that the video player can access them.BUT, when I add them to the media library, they give an error and do not play."Media error: Format(s) not supported or source(s) not found" (the videos are mp4 and play just fine when the bucket is set fully to public)what am I missing?Are the tech guys hosting my website missing something?How do I check if the AWS:referer is setup correctly?I agree with the video plugin developer - the problem is not with his video player, as when the S3 bucket is set to public, the player has no trouble playing the videos.Thanks for any help.FollowComment"
Private S3 bucket and http referer policy for a Wordpress site not working
https://repost.aws/questions/QUiYWzzKZYQL6Go81JZPOOoQ/private-s3-bucket-and-http-referer-policy-for-a-wordpress-site-not-working
false
"0I discovered the problem.First of all, I didn't want to allow public access but I needed to allow one of the permissions as follows in the Block public access (bucket settings).See attached image . http://www.nutopia.cc/Files/Capture.JPGThat solved my issue. Now, the direct link is not available except through the website specified in the bucket policy.So if you try to load the URL of the video, say, in the browser, it doesn't work, but my wordpress site IS accessing and playing the videos.Way to go self.MDEdited by: Modan9 on Nov 30, 2020 12:31 PM forgot to insert the image linkCommentShareModan9answered 3 years ago0Did you get this resolved?CommentShareYellowCodinganswered 2 years ago"
"I am new to AD, and am trying to add Users and Groups to the AD I created. I understand that I first need to create Users OUs but I cannot create that either. I've attached images.When I go to Windows > Administrative Tools > Active Directory Users and Computers, I get a message that says "To manage users and groups on this computer, use Local Users and Groups" (see Image1 attached). When I go to Windows > Administrative Tools > Administrative Center, I get a message saying "Your account or computer is not joined to any domain. Join to a domain and try again." (see Image2 attached). But I followed the instructions found here: https://docs.aws.amazon.com/directoryservice/latest/admin-guide/launching_instance.html. Furthermore some troubleshooting attempts show that the EC2 is joined (see Image3 and Image4). The Windows has the proper EC2DomainJoin Role with the 2 Policies attached (AmazonSSMManagedInstanceCore and AmazonSSMDirectoryServiceAccess). What am I missing?Edited by: AdminNewProject on Feb 19, 2021 11:35 AMFollowComment"
"Unable to create Users, Groups, or OUs"
https://repost.aws/questions/QU1zFmZdEJSJ68qw9s5cL26w/unable-to-create-users-groups-or-ous
false
"1Looks like you logged into the instance as a local user, probably "Administrator". Instead you will need to login as a domain user. By default we provide a user named "Admin". To switch to a domain user instead of a local user you can put the domain short name (NetBIOS name) at the front of the username like so, "NetBiosName\Admin". Looking at your screenshots I assume your NetBIOS name might be ActiveDirectory, if so then the user name would be "ActiveDirectory\admin". If you do not remember the Admin password you can reset it.https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_manage_users_groups_reset_password.htmlCommentShareJoeD_AWSanswered 2 years agoEXPERTJeremy_Greviewed a year ago0Thank you!!! Your explanation was incredibly helpful and I was able to create Users. The only thing is that the link goes to a page about Resetting a User's Password. I simply could not find the explanation you provided anywhere in the documentation. I had a hunch that I was logging in as the wrong user, but I couldn't figure out how to log in correctly. AWS provides a lot of very detailed information, so I don't know if I missed it. If I didn't, maybe that should be added?Edited by: AdminNewProject on Feb 20, 2021 7:29 PMCommentShareAdminNewProjectanswered 2 years ago0Thank you!! I was struggling with this for so longCommentSharekrishcanswered 2 years ago"
"I am looking to write a query CloudWatch log insight query to get application accessed browser. Not able to find any helpful documentation. Any suggestions or helpful guide to get it over.FollowCommentvinodaws 9 months agoIt depends on your logging. Cloudwatch Insights can only query on your logs in log groups. If your log group contains meaningful info about the browser activity - you can frame insight query based on that. "The query syntax used by Cloudwatch Insights supports different functions and operations that include but aren't limited to general functions, arithmetic and comparison operations, and regular expressions", so it totally depends on what you are loggingShare"
CloudWatch log insight query to get application accessed browser
https://repost.aws/questions/QU6GseAMryTKOUZdmsOcxGyA/cloudwatch-log-insight-query-to-get-application-accessed-browser
false
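For the Logs Insights question above, whether this is possible depends entirely on the log group containing a user-agent value, as the comment notes. A hedged sketch assuming web/ALB-style access logs where the User-Agent string appears in @message; the log group name, epoch timestamps, and the regex are all assumptions.
# Count requests by browser user-agent string captured in the log messages
aws logs start-query \
  --log-group-name /my-app/access-logs \
  --start-time 1700000000 --end-time 1700086400 \
  --query-string 'parse @message /"(?<userAgent>Mozilla[^"]*)"/ | stats count(*) by userAgent | sort count(*) desc'
# Then fetch the results with the returned queryId
aws logs get-query-results --query-id <queryId>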
"I'm now using an EC 2 instance with IIS+Windows Authentication and use ALB sticky session. However, there is always a logout problems when refreshing the browser for 4-5 times.This is how I set up sticky sessions as below:Target groups -> xxxx -> Attributes -> EditTarget selection configurationStickiness: enableStickiness type: Load balancer generated cookieStickiness duration: 1 daysFollowComment"
How ALB sticky session sets with IIS+Windows Authentication
https://repost.aws/questions/QUv-XaIZscTuem3je7pyvCjQ/how-alb-sticky-session-sets-with-iis-windows-authentication
false
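For completeness, the stickiness settings described in the question above can also be applied via the CLI; a hedged sketch that only mirrors those console settings (the target group ARN is a placeholder).
# Mirror the console stickiness settings on the target group
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:ap-northeast-1:111122223333:targetgroup/my-tg/0123456789abcdef \
  --attributes Key=stickiness.enabled,Value=true \
               Key=stickiness.type,Value=lb_cookie \
               Key=stickiness.lb_cookie.duration_seconds,Value=86400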