Description: string, lengths 6 to 76.5k
Question: string, lengths 1 to 202
Link: string, lengths 53 to 449
Accepted: bool, 2 classes
Answer: string, lengths 0 to 162k
"In my ASP.NET MVC running on EC2, I get the following exception when accessing my webapp:[NullReferenceException]: Object reference not set to an instance of an object. at System.Web.HttpApplication.GetNotifcationContextProperties(Boolean& isReentry, Int32& eventCount) at System.Web.HttpApplication.PipelineStepManager.ResumeSteps(Exception error) at System.Web.HttpApplication.BeginProcessRequestNotification(HttpContext context, AsyncCallback cb) at System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context)As you can see, the error is in the .NET framework, not thrown directly from my Global.asax.cs.Here is my Global.asax.cs:public class MvcApplication : System.Web.HttpApplication{protected void Application_Start(){ ... AWSXRayASPNET.RegisterXRay(this, "authorize-qa"); } }I've confirmed the X-Ray Windows Service is running on my instance.Edited by: LuisLDQ on Oct 21, 2019 10:43 AMAdded line about X-Ray Windows ServiceFollowComment"
X-Ray causes my app to throw a NullReferenceException from ASP.NET MVC code
https://repost.aws/questions/QUpMAuDxgKQk-m9uuuvSHkuw/x-ray-causes-my-app-to-throw-a-nullreferenceexception-from-asp-net-mvc-code
false
0 My problem was I wasn't putting the call to the SDK in the Init method. (LuisLDQ, answered 4 years ago)
When can we expect DynamoDB global tables availability in ap-south-2?
DynamoDB global tables availability in ap-south-2
https://repost.aws/questions/QUOcUg8Z3ZSpSEKdEf1FMl4A/dynamodb-global-tables-availability-in-ap-south-2
false
"2Unfortunately DynamoDB is not yet available in ap-south-2, however we are working on enabling new regions. As Nitin mentioned, there is not ETA on when Global Tables will be available in ap-south-2. However, we do consider customer demand when prioritizing which regions to add new features to ensure we provide our customers with the services they desire, as such, I will ensure to add your request as an influence to release Global Tables in ap-south-2.Please do keep an eye on our Whats New blog for announcements on feature releases or follow DynamoDB on Twitter to stay up t date with the latest news.CommentShareEXPERTLeeroy Hannigananswered 6 months agoEXPERTAWS-User-Nitinreviewed 6 months ago"
"Since roughly April 2023 new backend environments fail to build automatically in AWS Amplify when new branches are created.It used to work like a breeze, but now I get the following error and only manual creation from command line is working:The solution to add manually the parameters in the SSM ParameterStore doesn't work, as the environment fails to be created, so at every run it has a different name, and in any case it's a pain to add manually parameters for each new backend environment.Does anyone found a similar issue?[INFO]: 🛑 This environment is missing some parameter values.[INFO]: [appId,type] do not have values.Resolution: Run 'amplify push' interactively to specify values.Alternatively, manually add values in SSM ParameterStore for the following parameter names:FollowComment"
"Error in building new backend environment in AWS Amplify: This environment is missing some parameter values. [appId,type] do not have values."
https://repost.aws/questions/QU-AbtfdDVRviZsmeB4ni9JQ/error-in-building-new-backend-environment-in-aws-amplify-this-environment-is-missing-some-parameter-values-appid-type-do-not-have-values
false
"I have a Greengrass v2 core device set up and running, and I have subscribed to telemetry data from it, following the docs here: https://docs.aws.amazon.com/greengrass/v2/developerguide/telemetry.htmlHowever, it produces almost no data. I forward the events to CloudWatch logs, and often it can pass several days between each entry.In the docs it says that it tries to send a MQTT message every day with QOS 0, so some misses could be expected. But I have had constant internet connection all the time.Also, getting these metrics only "maybe once a day" seems very low. Is there no way to increase it?Thanks!FollowComment"
Greengrass v2 - Almost no telemetry data
https://repost.aws/questions/QUYeIuZxp2Sx6Xr1cyk6briw/greengrass-v2-almost-no-telemetry-data
true
"0Accepted AnswerHi,Unfortunately it is not possible to set it more than once per day. If you want additional or more frequent information you can write a component which sends any information that you want at the speed that you want.Cheers,Michael DombrowskiCommentShareEXPERTMichaelDombrowski-AWSanswered 2 years ago0Hello,In Greengrass Nucleus v2.1.0 you can now set the publishing interval. Please see https://docs.aws.amazon.com/greengrass/v2/developerguide/greengrass-nucleus-component.html#greengrass-nucleus-component-configuration-telemetry for more information.Cheers,Michael DombrowskiCommentShareEXPERTMichaelDombrowski-AWSanswered 2 years ago0Thank you for the answer!It seems like the minimum publishing interval is once per day. Is there a reason for this? (if you're talking about "periodicPublishMetricsIntervalSeconds")CommentShareFredrikManswered 2 years ago"
"Hi Everyone,I tried to do remote re-indexing(Both domain's are in Opensearch 1.1 ) and received below error message.{"error" : {"root_cause" : [{"type" : "null_pointer_exception","reason" : null}],"type" : "null_pointer_exception","reason" : null},"status" : 500}Originally I thought _source = false is set in my indexes, so I tested on new indexes that their _source was explicitly set to true , but still was receiving the same error message.PUT my-index-000002{"mappings": {"_source": {"enabled": true}}}I would appreciate if you help me on this.ThanksAspetFollowComment"
Remote reindex API AWS ElasticSearch Null Pointer Exception
https://repost.aws/questions/QUivVEMNtCShKerxW0ulgaQA/remote-reindex-api-aws-elasticsearch-null-pointer-exception
false
"1Hi there, "null_pointer_exception" is generally seen when there is no proxy in front of remote domain. You need to have a proxy in front of the remote domain (domain which has the index that needs to be reindexed) to be able to reindex it to the local domain from which you are issuing the request.Even if both the domains are in the same VPC, the request when trying to reach the remote host, is considered to be external and does not resolve / authenticate to be able to reach the remote domain. Hence the need for proxy in front (1). If this is already setup, make sure the proxy domain has a certificate signed by a public certificate authority (CA) as self-signed. If you continue to run into same issue please open a support case with AWS premium support so that appropriate solution can be provided after troubleshooting.(1) https://docs.aws.amazon.com/opensearch-service/latest/developerguide/remote-reindex.htmlCommentShareSUPPORT ENGINEERHarshith_Manswered a year ago"
"I'd like my IAM role for an EC2 instance to only be assumed based on the instance's tags. Specifically, I have an Environment tag, and I only want e.g. an EC2 instance tagged with Environment=production to be able to assume my production role.I'm attempting to do this via the IAM role's trust policy, but have not been able to build policy that allows for this. I've tried several variations of:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals": { "aws:ResourceTag/Environment": "production" } } } ]}I've tried aws:ResourceTag, ec2:ResourceTag, aws:RequestTag, aws:PrincipalTag, and probably a few others to no avail.When I have this condition in the trust policy, the EC2 console doesn't complain when I attach the profile to the instance, but the AWS CLI on the instance can't find any credentials unless I remove the condition from the trust policy.Is there some reason the instance's tags are not usable in a trust policy? Is there another way to restrict EC2 assuming a role based on the instance tags?FollowComment"
How can you restrict EC2 instances to assuming an IAM role based on the instance's tags?
https://repost.aws/questions/QUKzSqeBjbTwmhGCQCNl8DDg/how-can-you-restrict-ec2-instances-to-assuming-an-iam-role-based-on-the-instance-s-tags
false
"0Based on my experience with IAM, what you're looking to achieve isn't possible with existing functionality. Instead, I recommend creating and utilizing separate AWS accounts for production, staging, development, etc., and allowing only authorized individuals or teams access to those accounts. This is consistent with the AWS best practices recommendations we give to customers regularly, and ensures access to data and other resources is less likely to be accidentally granted to unauthorized parties.CommentShareEXPERTMichael_Fanswered a year agoderelk a year agoThanks, but missing the point a bit. What if it wasn't environment, but cost center, or owner? There are pages and pages of AWS documentation about controlling access with tags, but in practice it doesn't seem to work anywhere that I've needed it.Share0I think I got what you are trying to achieve but not sure if the condition statement is in the right place here.In order to assign permissions to an Amazon EC2 Instance, you need to assign a IAM role to this EC2 Instance. Amazon EC2 uses an instance profile as a container for an IAM role.See also the note for Instance Profiles: An instance profile can contain only one IAM role, although a role can be included in multiple instance profiles. This limit of one role per instance profile cannot be increased.What I would recommend is to look into:Check out Tags for Instance Profiles to determine the matching role according to the EC2 InstanceUse condition statements in the IAM policies included with the Instance Profile. This approach would depend on the required policy, keep in mind quotas for IAM entities.CommentSharekunztanswered a year ago"
"Hi,I have simple lambda code:func handleLambdaEvent(ctxOrg context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {orgDL, _ := ctxOrg.Deadline()fmt.Println(orgDL)when its deployed to aws and called ( from API Gateway )I see in logs current timestamp + X seconds,but when I run this locally and use awslambdarpc to call it I see:1970-01-01 00:00:00 +0000 UTCwhich causes some problems in other parts of this lambda.Is it possible somehow, using awslambdarpc or maybe some config ( just for local ) to define this deadline value ?FollowComment"
"Aws SDk Go V2 Lambda, deadline is 1970-01-1 while local debugging"
https://repost.aws/questions/QUcheUbOSJS-yWBxbXh5D1tw/aws-sdk-go-v2-lambda-deadline-is-1970-01-1-while-local-debugging
false
"0I think it's very odd that the timestamp is the beginning of the Unix epoch. There's been a few other questions which I think are related given that the console in those cases also looks like it is set to January 1, 1970. Please contact our support team to check on the status of the account.CommentShareEXPERTBrettski-AWSanswered 2 months ago0ok looks like original awslambdarpc client is simply missing Deadline value set, so I prepared this default deadline to now()+15 seconds in my fork:https://github.com/goodsafe-eu/awslambdarpc/commit/b6181a1acc581c12945c7361a3a3e4d5ad5f59c4and now works fineCommentShareBakuanswered 2 months ago"
"I did not ever used to have this problem. I have auto deploy set up from a github repository. When I make a push to the repo, the provision-build-deploy process gets completely stalled and never even provissions.Amplify introduces the dockerfile with the following:We are provisioning your build environment with a Docker image on a host with 4 vCPU, 7GB memory. Each build image gets its own host instance, ensuring all resources are isolated. The contents of our Dockerfile are displayed below for your information.But nothing ever happens! It is stuck on provision!FollowComment"
Amplify stuck on provision...every time! Cannot deploy new website versions.
https://repost.aws/questions/QUU-9jox4VTYOm9jIPt3P1cA/amplify-stuck-on-provision-every-time-cannot-deploy-new-website-versions
false
"Simple Storage Service is active in my Free Tier. I checked all options to delete this service but unable, but then deleted the object (yaml file) under StorageCan anyone let me know the procedure to delete this service?FollowComment"
Delete Simple Storage Service
https://repost.aws/questions/QUf-1VXSCASyGnchh1kw5t2Q/delete-simple-storage-service
false
"1S3 deletion cannot be performed if there are objects in S3.If all objects have been deleted, they can be deleted by following the steps in the following document.https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-bucket.htmlIf versioning is enabled, it is also a good idea to check for old versions remaining in the object.CommentShareEXPERTRiku_Kobayashianswered 14 days ago0Amazon Simple Storage Service (S3) isn't a service that you can "delete" in the way that you might delete a file or an object. It's a web service offered by Amazon Web Services (AWS) that provides storage through web services interfaces. When you talk about "deleting" S3, what you're likely referring to is deleting all the S3 resources that you've created, so you aren't charged for them.Delete all S3 objects: Go to the S3 console, navigate to each bucket, and delete all objects within the bucket. You can do this by selecting the bucket, selecting the objects inside, and choosing "Delete". Please note that this action is irreversible and you will lose all data stored in these objects.Delete all S3 buckets: After deleting all objects, you can delete the buckets themselves. From the S3 console, select the bucket and choose "Delete". You will be asked to confirm the deletion. Again, this action is irreversible, and you will lose all configuration associated with the bucket.Remove any associated resources: If you're using any other resources associated with S3, such as S3 events in AWS Lambda, you should ensure these are removed as well. If you have lifecycle policies, replication rules, or any other bucket configuration, make sure to delete them.Check for Transfer Acceleration: If you have enabled S3 Transfer Acceleration on any bucket, you need to disable it to avoid charges.Check for S3 versions: If you've enabled versioning on your S3 buckets, you might have multiple versions of an object, all of which will be charged for storage. Make sure to delete all versions of an object.Check for Cross-Region Replication: If you've set up Cross-Region Replication (CRR), you may have replica copies of your objects in another AWS region. Check your replication settings and ensure that you delete any replicated objects and buckets in other regions.CommentShareEXPERTsedat_salmananswered 14 days agoEXPERTalatechreviewed 14 days ago"
Lightsail wordpress image under attack... help... I am new to AWS, so for one of my learning sessions I moved 10 WordPress sites from GoDaddy to AWS Lightsail WordPress instances. Within 24 hours I saw my /wp-admin/ under attack. Some of the sites are about 4 years old and never had a single attack; now I don't know why, but they are under attack. Hundreds of IP addresses are trying to log in to /wp-admin/ each night. Can someone please help me address this fast? I have a plugin that is blocking them one at a time, but is there anything I can do on the AWS side?
Lightsail wordpress image under attack... help.......
https://repost.aws/questions/QUOam5cBGsQUqcxWDyyoqyVQ/lightsail-wordpress-image-under-attack-help
false
"1Regrettably, Lightsail does not support AWS WAF. However, you could create a Cloudfront distribution per WordPress site, attach AWS WAF to each distribution, and point the origin in Cloudfront to the Lightsail ALB. You'd need to adjust your DNS as well. This is far from the ideal, of course.CommentShareC Mencarellianswered 5 months agoJohnathanSmith 5 months agoDo you know any good videos in it? I am so new to this.ShareC Mencarelli 5 months agoNot sure about videos, but the documentation is probably going to be one of the better sources I think: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-create-delete.htmlYou can also engage professionals on AWS IQ or various other consultants if you just need some assistance getting started.Share"
"HelloI have been implementing what should be a fairly straight-forward integration with SNS to send mobile push notifications.However, I have found 2 different sources from the docs ^1 and and the blog ^2 that clearly state I have ignored some corner cases.It may be tempting to just call CreatePlatformEndpoint every time at app startup and call it good. In practice this method doesn’t give a working endpoint in some corner cases, such as when an app was uninstalled and reinstalled on the same device and the endpoint for it already exists but is disabled.Looking at the provided examples, AWS recommend storing the platform endpoint ARN bound to the current device, but as I understand it, this cannot be stored in the device itself, since a malicious user may alter it and thus be granted access to another user's platform endpoint.Nor could I store it in Dynamo (or another database), indexed by the device token, which could be changed by FCM.What would be the recommended way to store the platform endpoint? Should I encrypt it with KMS and still store it on the end user device? Or maybe storing it in plain text is not as bad as I believe? Or did I misunderstand something about the way the device token is updated?FollowComment"
Where to store SNS Platform Endpoint ARN when registering devices?
https://repost.aws/questions/QUHWTTxLovRc2hj53xV3PI7A/where-to-store-sns-platform-endpoint-arn-when-registering-devices
false
"-1Storing the platform endpoint ARN on the end user device is not recommended due to the potential security risks you mentioned. Encrypting the ARN with KMS and storing it on the device would add an extra layer of security, but it may still be vulnerable to attacks if the encryption key is compromised.A better solution would be to store the platform endpoint ARN in a secure backend system like DynamoDB, indexed by a unique identifier for the user, such as a user ID. This way, the platform endpoint ARN can be retrieved and used to send push notifications without the need to store it on the end user device.To handle the corner case where the app is uninstalled and reinstalled on the same device, you can use the SNS feature called "event feedback". When an app is uninstalled, SNS will receive an event feedback message from the platform (such as FCM or APNS) indicating that the endpoint is no longer valid. You can use this message to disable the endpoint and delete it from your backend system, and create a new endpoint when the app is reinstalled.CommentSharemishdaneanswered 2 months agoGiorgio Azzinnaro 2 months agoI believe this answer was generated through GPT. Pasting my question into GPT produces this reply with very little difference. In addition, this user produced 13 replies in less than 1 hour yesterday. An average 4 minutes per answer. All of them in a very remarkable form that reminds me of ChatGPT.While I do see the value of such a tool - and in fact - I used it myself to do my research before coming to re:Post - I created this question hoping to get feedback from either experienced AWS users, who have first-hand experience with SNS, or maybe AWS engineers who worked on the service.Share"
"Hello,Getting up to speed on Lambda limits and reading through the docs and faqs I have found conflicting retry behaviour for asynchronous invocation:"Lambda functions being invoked asynchronously are retried at least 3 times."source: https://aws.amazon.com/lambda/faqs/"Asynchronous invocation: If your Lambda function is invoked asynchronously and is throttled, AWS Lambda automatically retries the throttled event for up to six hours, with delays between retries."source: https://docs.aws.amazon.com/lambda/latest/dg/concurrent-executions.html"If AWS Lambda is unable to fully process the event, it will automatically retry the invocation twice, with delays between retries"source: https://docs.aws.amazon.com/lambda/latest/dg/retries-on-errors.htmlI understand some of the docs may be specific to throttling and the retry policy for it may be different but it's unclear what's what.Does anyone know which one is right? I was mostly interested in throttling retry behaviour (max concurrency limit reached)ThanksFollowComment"
Async Lambda retries conflicting documentation
https://repost.aws/questions/QUZO3t3bWBTuqbcj3OyNsrwA/async-lambda-retries-conflicting-documentation
true
"0Accepted AnswerHi Robert,All three documents you provided are correct though somewhat confusing. We categorize Lambda processing errors into system errors such as throttling and customer errors such as invalid runtime. System errors are retried for six hours with delays between retries; customer errors are retried twice. Hope this clarifies.ThanksJiaCommentShareAWS-User-9034595answered 4 years ago"
"Hello,I have an AWS account and I need to create another instance in the same account. I am looking everywhere to see if I can hence get separate invoices for each instance.The only answers available seem to be: have different AWS accounts. Am I missing something?ThanksFollowComment"
Billing Instances Separately
https://repost.aws/questions/QUZRHsrvMWRzedoHH_HW5jMA/billing-instances-separately
false
"0Hi, @KarveshYou can't.(However, it may be supported depending on the billing agency vendor.)As an alternative, you can use the cost allocation tag to keep track of the usage charges for each instance.https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.htmlIf you want to separate it as an invoice, you need to separate the account.CommentShareEXPERTiwasaanswered a year ago"
"Hi,a customer is asking if DMS could be a viable option for real-time replica of an on-prem Oracle to RDS. The questions are:what is the minimum achievable latency (besides network latency between on-prem and AWS)?How does the DMS polling process work? Is there a defined sleep interval?ThanksFollowComment"
DMS latency
https://repost.aws/questions/QUbMwMl-FgROCa0Hfyemf-Ww/dms-latency
true
"0Accepted AnswerTo answer your question simply, DMS is not a real-time replication engine. If you look at the settings for change management in DMS, without BatchApply turned on, DMS will collect transactions from the source database every second. It will attempt to collect the minimum number of transactions for a 1 second period before it applies those transactions to the target database.https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.ChangeProcessingTuning.htmlIt is also possible for DMS to gather changes from the source DB faster than it can write them to the target DB. It is also possible for these "buffered" transactions to fill the DMS instance memory therefore being written to disk. In these cases, latency will be impacted.There are many options for tuning DMS based on the customer requirements, so if we look at the specific requirements for the customer's use case, then it might be viable option. However, simply asking for "real-time" replication is not enough to conclude that DMS is or isn't the right solution for them.Minimum achievable latency depends on the use case and the DMS settings the customer is using for the use case.Task Settings for DMSI hope this helps!CommentShareAWS-User-0259922answered 4 years ago"
"We need to allow Nimble Studio users to download files from nice DCV sessions, to their local machines.There is general information on Nice DCV here: https://docs.aws.amazon.com/dcv/latest/adminguide/security-authorization.html...however, in the context of Nimble Studio, when I am updating a Launch Profile's AMI, and I edit the C:\Program Files\NICE\DCV\Server\conf\default.perm file, and push that AMI update all the way through to a Lanuch Profile to boot up with that edited permissions default, it doesnt propogate through to the end users.The default.perm file, when logged in as Administrator during the AMI update, had this added/edited to it (everything else remained commeted out):[permissions]%any% allow file-download... when an End User boots the Launch Profile, with the updated AMI, their C:\Program Files\NICE\DCV\Server\conf\default.perm file has the following (!?):[permissions]%any% disallow builtin%any% allow display audio-out pointer%owner% allow builtin%any% deny file-downloadMy users cant download the work they are producing in the cloud, looking forward to resolving this! :DFollowComment"
DCV file download permissions within Nimble Studio
https://repost.aws/questions/QUAO1EiUu_TTe1aXXQ8w1Q4Q/dcv-file-download-permissions-within-nimble-studio
false
"0Hey there! On the permission side, if you would like to allow downloads for the owner and not for collaborators, you can remove the last line of the second config. Note that explicit denies cannot have an override and disallows will need to have a subsequent allow permission to override.[permissions]%any% disallow builtin #Dont allow any feature to anyone (can be overridden)%any% allow display audio-out pointer #Allow everyone to display stream, audio out, and pointer control%owner% allow builtin #Allow owner of session to have all features This config means everyone can collaborate, but they are limited to pointer control and audio out. The owner will have all features including download.I suspect that Nimble Studios is overwriting your configuration. Have you tried updating the default.perm with a custom configuration?CommentShareAndrew_Manswered 4 months ago"
"Hello,The following Java code is meant to get the full name of the first found S3 bucket which name starts with "mys3" and, if none exists, then to create a new one and return its name....AmazonS3 amazonS3 = AmazonS3ClientBuilder.standard().build();...this.bucketName = amazonS3.listBuckets().stream().filter(b -> b.getName().startsWith("mys3")).findFirst().orElse(amazonS3.createBucket("mys3" + RANDOM)).getName();...it raises the following exception:com.amazonaws.services.s3.model.AmazonS3Exception: Your previous request to create the named bucket succeeded and you already own it. (Service: Amazon S3; Status Code: 409; Error Code: BucketAlreadyOwnedByYou; Request ID: D8BV5DZVYFE5P9QA; S3 Extended Request ID: 27GbfEzSb0h3/taU5jB2/DM1vu4Te2mM5GdbgudbpkrJUlKmaDpffGXe1iaPTiGxtU4gjVzhSBU=; Proxy: null), S3 Extended Request ID: 27GbfEzSb0h3/taU5jB2DM1vu4Te2mM5GdbgudbpkrJUlKmaDpffGXe1iaPTiGxtU4gjVzhSBU=Not sure why this exception is raised as nothing seems to be wrong with the code above and googling for a while, I didn't find any pertinent solution. Then, I replaced the code above by the equivalent one, as follows:...AmazonS3 amazonS3 = AmazonS3ClientBuilder.standard().build();Optional<Bucket> optionalBucket = amazonS3.listBuckets().stream().filter(b -> b.getName().startsWith("mys3")).findFirst();if (optionalBucket.isPresent()) this.s3BucketName = optionalBucket.get().getName();else this.s3BucketName = amazonS3.createBucket("mys3" + RANDOM).getName();...This time everything works as expected and no exception is raised. Then I tried a 3rd version of the code, also equivalent to the first one:...AmazonS3 amazonS3 = AmazonS3ClientBuilder.standard().build();Optional<Bucket> optionalBucket = amazonS3.listBuckets().stream().filter(b -> b.getName().startsWith("mys3")).findFirst();this.s3BucketName = optionalBucket.isPresent() ? optionalBucket.get().getName() : amazonS3.createBucket("mys3" + RANDOM).getName();...It raised the same exception as previously. So, am I right to say that the AWS Java SDK behave inconsistently as it works differently for different version of equivalent Java code ?FollowComment"
Apparent inconsistency when using the Java SDK to handle S3 buckets
https://repost.aws/questions/QU92mlI2lATD-SEZpoqQgdqw/apparent-inconsistency-when-using-the-java-sdk-to-handle-s3-buckets
false
0 Could anyone help, please? (Nicolas, answered a month ago)
"Hello,We got an issue when we tried to add a node group in EKS with t4g class instanceAutoScalingGroupInvalidConfiguration - Amazon EC2 Autoscaling does not support the requested instanceType t4g.medium.I check and t4g are available in my region (eu-west-3)I successfully set a t4g class instance by manually edition a launch configuration in an other node group.Maybe you have forgot a condition somewhere when you release this new type of instance ?FollowComment"
EKS issue when adding node group with t4g class instance
https://repost.aws/questions/QUfyli6u0vSEK7Pn47mZNBQg/eks-issue-when-adding-node-group-with-t4g-class-instance
false
"0Is there any difference between working node group and not-working node group? You should check which AZs are configured to each node group.Some instance types is not supported in the recently launched AZ.CommentShareEXPERTposquit0answered a year ago0node group are exactly the same (I test it with terraform and manually)node group can deploy instance in the 3 AZ (eu-west-3a, b, c)I also check t4g instance are available in EC2 etc, i can set it manually by editing the launch configuration of the ASG created by EKS (if I first ask m6g instance for exemple)As I can do it manually by cheating EKS, i suppose it's only a software limitation / bugCommentShareAWS-User-0367151answered a year ago0problem seems to have been resolve by aws teamCommentShareAWS-User-0367151answered a year ago"
"I've hosted a simple PHP website, with a few pages. I noticed that it doesn't work/load for most visitors - Although sometimes it opens. Any ideas to make it 100% Working?*My CPU Burst Capacity is 100%, check the instance metrics here: https://prnt.sc/EozSKE8Bs1ru*I'm using Cloudflare and there are no blocking rules there.Website URL: https://iphoneimeicheck.infoFollowComment"
My LightSail Website not working properly
https://repost.aws/questions/QUtSrlnBveTfaIf0fkYizL0Q/my-lightsail-website-not-working-properly
false
"0If it's a new website with a new domain name, it may take several hours to a few days for the DNS records to be propagated.CommentSharekdambiecanswered 7 months agoYahya Elharony 7 months agoThanks Kdambiec. The domain is not new, I built it since +1yr but I moved recently to AWS for better scalability!Share"
"Hi there, hope all of you are fine.I am trying to configure Alarm for stopping EC2 when it is idle. I have tried CPU threshold with average, sum, with different time periods, but none is working. Maybe, it is continuing the previous state, before alarm was updated, but this should not be the behavior, it should check if the Instance is stopped or not..Kindly help me with this, thanks.FollowComment"
CloudWatch Alarm behaving weird for stopping EC2
https://repost.aws/questions/QUTVXpwicWQby07-6uchXOeQ/cloudwatch-alarm-behaving-weird-for-stopping-ec2
false
"0Assuming this is the use case: To stop the EC2 instance when the CPUUtilization is less than X%.Depending on whether or not detailed monitoring is enabled for the EC2 instance, the period for your alarm must be selected accordingly. Enable or turn off detailed monitoring for your instances. In case of detailed monitoring the metrics is populated with datapoints every minute.Without detailed monitoring enabled, selecting a period of 5-min 'or' above should give the alarm enough datapoints for evaluation i.e. regardless of "Datapoints to alarm" value - the alarm will not have missing datapoints as long as the metric is populated every 5 minutes.Regarding statistics you choose for the alarm - I'd recommend to use 'Average' but any other statistic is valid/appropriate for this metric.If the alarm is continuing to maintain its state when there is missing data check for the configuration of Missing data treatment, whether it is set to ignore. Change it to either breaching, notBreaching or missing as per your use case. Read more about missing data here.If you are new to CloudWatch alarms, see if this documentation helps in creating an alarm to stop EC2 instance using CPUUtilization metric, read this: Add stop actions to CloudWatch alarms.CommentShareSUPPORT ENGINEERShreyas_Manswered 9 months ago"
"Hi,We want to sell model packages on the AWS Marketplace and saw we could provide model artifacts on S3 along with the image.Our question:Can we encrypt the model artifacts without giving the subscribers additional permissions to our KMS?(Asking because it's not specified in the documentation - it says you can encrypt your model artifacts in general using KMS, but what happens for a marketplace offering?)Thank you!FollowComment"
Encrypting SageMaker Model Artifacts for the AWS Marketplace Offering
https://repost.aws/questions/QUapY5aChhT1iBeaPvW3-yMQ/encrypting-sagemaker-model-artifacts-for-the-aws-marketplace-offering
false
"hello,i installed Docker-Compose mailserver on EC2, now i cant connect to that EBS, once i Attach another EBS it works, as i checked AWS will block EC2 with their firewall! how can i connect to my old EC2+EBS again?thanksFollowComment"
port 22: Connection timed out
https://repost.aws/questions/QUXx5hFWBHR-qF1p_AehS27w/port-22-connection-timed-out
false
"0Hello,Generally for connection timed out issues, following are the reasons -The instance's IP address or hostname is correct.The instance is passing its health checks.The security group of the instance allows incoming traffic on TCP port 22.The network ACLs of instance subnet allows incoming traffic on TCP port 22 and allow ephemeral port for the outgoing traffic.The route table of the instance’s subnet is configured properly to provide connectivity between EC2 instance and the SSH client.vThere isn't a firewall blocking the connection between SSH client and the EC2 instance.SSH isn't blocked by TCP Wrappers in the instance.Additionally, I do want to point out that AWS blocks outbound traffic on port 25 (SMTP) of all EC2 instances and Lambda functions by default.https://aws.amazon.com/premiumsupport/knowledge-center/ec2-port-25-throttle/According to your specific usecase, I installed docker-compose and installed Mailu afterwards to recreate the issue. However, I currently didn't run into any SSH errors even after installing the mailserver on the Amazon Linux 2.In case if you are still facing this issue, we will need to troubleshoot based on your configurations. Could you please create a support case with our Premium Support team directly so we may discuss details on your resource configurations?Please do not post any sensitive information (such as account or it's resources details) over re:Post since this is a public platform.CommentShareSUPPORT ENGINEERYash_Canswered 4 months ago"
"What do I need to change on my nginx/sites-available/default config so that it can find /api/auth? I am new to nginx. I believe the issue is in the nginx configuration for proxying requests to the api. here is my /nginx/sites-available/default config: server { listen 80 default_server; server_name _; # react app & front-end files location / { root /opt/devParty/client/build; try_files $uri /index.html; } # node api reverse proxy location /api { root /opt/devParty/routes/api; try_files $uri /api/auth.js =404; add_header 'Access-ControlAllow-Origin' '*'; proxy_pass http://localhost:4000/; } }Here is the file structure on EC2 ubuntu:devParty/├── client│ ├── package-lock.json│ ├── package.json│ └── webpack.config.js├── config│ ├── db.js│ └── default.json├── middleware│ └── auth.js├── models│ ├── Post.js│ ├── Profile.js│ └── User.js├── package-lock.json├── package.json├── routes│ └── api└── server.jsAnd my server.js file:const { application } = require('express')const express = require('express')const app = express()const PORT = process.env.PORT || 4000const connectDB = require('./config/db')const path = require('path')// Connect DatabaseconnectDB()// Init Midddleware// Allows us to get data in req.body on users.jsapp.use(express.json({ extended: false }))// app.get('/', (req, res) => res.send('API Running'))// Define Routesapp.use('/api/users', require('./routes/api/users'))app.use('/api/auth', require('./routes/api/auth'))app.use('/api/profile', require('./routes/api/profile'))app.use('/api/posts', require('./routes/api/posts'))// Server status assets in productionif(process.env.NODE_ENV === 'production') { // Set static foler app.use(express.static('client/build')) app.get('*', (req, res) => { res.sendFile(path.resolve(__dirname, 'client', 'build', 'index.html')) })}app.listen(PORT, () => console.log(`Server started on port ${PORT}`))FollowComment"
/api/auth 404 (Not Found) nginx ec2
https://repost.aws/questions/QU6IZUM_t7RnmMPstJv1izhA/api-auth-404-not-found-nginx-ec2
false
"0It seems like the error is occurring because your Nginx configuration is not correctly proxying requests to your API server. Here's what you can try to fix the issue:Verify that your API server is running and accessible on localhost:4000. You can do this by running curl http://localhost:4000/api/auth on your EC2 instance's terminal.In your Nginx configuration, replace root /opt/devParty/routes/api; with proxy_pass http://localhost:4000; to proxy all requests under /api to your API server running on port 4000. Also, remove try_files $uri /api/auth.js =404; from the location /api block.Your updated Nginx configuration should look like this:server { listen 80 default_server; server_name _; # react app & front-end files location / { root /opt/devParty/client/build; try_files $uri /index.html; } # node api reverse proxy location /api { add_header 'Access-Control-Allow-Origin' '*'; proxy_pass http://localhost:4000; }}Save the updated configuration and restart Nginx using the command sudo service nginx restart.Access your API server by visiting http://<your-ec2-instance-public-ip>/api/auth in your browser.CommentSharehashanswered a month agodbms_dd a month agoThanks for the feedback! I made these changes but the console now shows error:TypeError: a.map is not a functionat Profiles.js:22:30and front-end does not load. instead now there is a blank grey screen. Do you think it's the use of require in server.js:app.use('/api/auth', require('./routes/api/auth'))along with the file structure being/routes/api/autheven though in nginx config the location islocation /api?here is a link to the github repo for my project:https://github.com/codereyes-1/DevSocialShare"
"I was able to create a Route 53 domain, create each user email accounts (25), and a couple of Groups. However, where I am really stuck, is how to use them most effectively. I want to create a Group Calendar that all employees use to put locations, off day requests, and general meeting information. I would also like to have view only access for the President's and CEO calendars, so that we have visibility of where they are and their personal secretaries can make changes and edits to the calendar that employees with view only access can see upcoming changes.I need some assistance understanding if I am setting it all up from a newbie admin stand point, and then how to execute/manage it better. Not too much information on any forums that are helpful with advanced workmail/calendars.DFollowComment"
Workmail and Calendar questions
https://repost.aws/questions/QUCB7HjVAtSCC0LXy8v6dMqg/workmail-and-calendar-questions
false
"0Hi,I hope I can help out here and give you some pointers on solving yous challenges. First I would suggest to create a group with all you users so you can use this to grant permissions.For the shared calendar I would create a shared mailbox (resource) and disable the auto meeting responses and add 1 or 2 users as delegate to this mailbox. This will allow you users to send meetings to this shared mailbox and the delegates can manage them. If you want your users to directly add meetings, one of the delegates can add read/write permissions (for the group that was created earlier) to the calendar of the shared mailbox and allow all users to open the calendar and add their appointments.For the calendars of the CEO and President everyone by default can already see "free busy" information. You can control the level of that via the calendar permissions. This will allow users to see availability when booking meetings. Some clients will be able to open the calendar with these permissions (Windows Outlook). Other clients will need read permissions at a minimum to open the calendar as shared calendar. For their personal secretaries I would add them as delegate on the mailbox so they can manage their mailboxes on their behalf.Here is a documentation page that might be helpful to learn more about permissions and delegation: AWS DocsKind regards,RobinCommentShareMODERATORrobinkawsanswered a year ago"
"I had to make an account and this is the best place I could find where I could file a case. If there is some e-mail or a different sector you can directly forward me to, please inform where I should go.I speak in name of the Brazilian ISP DataCorpore, AS28271, we have thousands of IP addresses and thousands of clients and right now many of our prefixes have been incorrectly listed at AWS' firewall WAF as a proxy provider. While we do have an IDC and it's possible for some of our clients to host such services, these span only a small portion of our prefixes.Right now many of the prefixes being blocked by AWS firewall are in fact for broadband and commercial internet use and because of such a large portion of websites being hosted in AWS we have hundreds of clients that cannot access many websites due to denied access (error 403).We request to have these listing on our prefixes (whois AS28271) removed from the WAF anonymous IP list or at least have it readjusted to correctly reflect our prefixes used for such means.I can see that a similar question has been asked before on the forums here but it was largely misunderstood, instructing the user to contact their ISP to somehow get it fixed when it is completely out of their ISP's control, an answer that really hits home for us since that's exactly what our own clients have been doing and we are left powerless to solve it. We are not an AWS client and we would like to get this resolved directly with AWS.FollowComment"
Out of recourse to contact WAF team
https://repost.aws/questions/QUaGh7MJ9_TCSerTcGG20Nlg/out-of-recourse-to-contact-waf-team
false
"Could you share AWS Services list, per IaaS, PaaS or SaaS modele.g. RDS is PaaSFollowComment"
"AWS Services list by IaaS, PaaS or SaaS model"
https://repost.aws/questions/QU4CWd3oc4TXKmmSkiFqpZZA/aws-services-list-by-iaas-paas-or-saas-model
false
"0I think the short answer is that it straddles all ends, has a lot in the middle and goes beyond the prototypical generic PaaS, IaaS or SaaS.I'd like to think it about it this way and you can read more on - AWS ProductsCommentShareNikoanswered 3 months ago"
"GoalI'm trying to run a service using a task definition which is a flask server image for backend.ProblemI keep getting this error message - There was an error deploying the backend-flask container. Resource handler returned message: "Error occurred during operation 'ECS Deployment Circuit Breaker was triggered'." (RequestToken: ..., HandlerErrorCode: GeneralServiceException)I triedRemoved the cluster and service. Create a cluster, then create a service using the task definition (provided below) from scratch all over again.Attached the AmazonEC2ContainerRegistryReadOnly policy to both Task role and Execution role to make sure that the service can access the ECR for the flask image.Updated the flask Dockerfile (from EXPOSE ${PORT} to EXPOSE 4567)Checked the CloudFormation log but the error message is too general, unable to pinpoint where is wrong (please see the attached image).Can someone please help me:how to reason/identify the cause based on the clues I suggested here.where to look and what to fix.I appreciate your taking time for my trouble. Thank you.CloudFormation logTask definition: backend-flask.json{ "family": "backend-flask", "executionRoleArn": "arn:aws:iam::<ACCOUNT_ID>:role/appServiceExecutionRole", "taskRoleArn": "arn:aws:iam::<ACCOUNT_ID>:role/appTaskRole", "networkMode": "awsvpc", "cpu": "256", "memory": "512", "requiresCompatibilities": [ "FARGATE" ], "containerDefinitions": [ { "name": "backend-flask", "image": "<ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/backend-flask", "essential": true, "healthCheck": { "command": [ "CMD-SHELL", "python /backend-flask/bin/flask/health-check" ], "interval": 30, "timeout": 5, "retries": 3, "startPeriod": 60 }, "portMappings": [ { "name": "backend-flask", "containerPort": 4567, "protocol": "tcp", "appProtocol": "http" } ], "logConfiguration": { "logDriver": "awslogs", "options": { "awslogs-group": "cruddur-week6-ecr", "awslogs-region": "us-east-1", "awslogs-stream-prefix": "backend-flask" } }, "environment": [ ... ], ] } ] }FollowComment"
Resource handler returned message: ECS Deployment Circuit Breaker was triggered (HandlerErrorCode: GeneralServiceException)
https://repost.aws/questions/QU5nk1tyN_TpWbKm85LdWGgA/resource-handler-returned-message-ecs-deployment-circuit-breaker-was-triggered-handlererrorcode-generalserviceexception
true
"1Accepted AnswerHi Gwen,is the app healthy? Typically the circuit breaker is triggered when the app does not bootstrap correctly and is deemed as unhealthy.Start looking into this: https://repost.aws/knowledge-center/ecs-task-container-health-check-failuresHope it helpsCommentShareEXPERTalatechanswered 17 days agoGwen Leigh 17 days agoHi @alatech, thank you so much for your help.I resolved the issue greatly owing to your lead.To benefit other members in the future, here's how I solved it:Tested the docker file locally.Turns out, I provided NONE of the env variables in the Dockerfile which are used across my backend app.I hard-coded the env variables in the Dockerfile.Uploaded the Dockerfile WITH env vars to ECR.Tasks are now running on ECS fine and pretty, no problem!Thank youuu :DShare"
"Hello,I have set up a few months ago some mail servers with verified identities, increased quotas and I was out of the sandbox and everything worked great until a few days ago when I noticed that the emails are no longer sent.Reputation metrics are bellow warning levels and everything is green and active (I see "Status Healthy" in the dashboard), didn't get any notifications about the service being in verification or anything negative. All looks perfect, except emails are not sent + if I check the service's status in cloudshell withaws ses get-account-sending-enabled --region eu-central-1I get:{"Enabled": false}I tried aws ses update-account-sending-enabled --enabled --region eu-central-1 , but nothing changes.I also tried sending a "Test email" from https://eu-central-1.console.aws.amazon.com/ses/home?region=eu-central-1#/verified-identities and I get:"The message can't be sent because the account's ability to send email is currently paused.Sending is paused for this account."The support guys said only "I have confirmed that your account is active and able to send email in the EU Frankfurt region." and sent me here to find a solution :(FollowComment"
"SES account is healthy, but I still can't send emails"
https://repost.aws/questions/QUIZlEZpbQQwOiQBLRItsblw/ses-account-is-healthy-but-i-still-can-t-send-emails
false
"0It seems like your AWS SES account's sending is currently paused. There could be multiple reasons for this, such as a high bounce or complaint rate, or some other policy violation. To resolve this issue, I suggest you take the following steps:Check your email sending statistics and verify that there are no issues that could trigger a sending pause. You can check your sending statistics in the AWS Management Console by navigating to the SES dashboard and selecting the "Sending Statistics" option.Check your email content and make sure that it is compliant with AWS SES policies. Make sure that your emails do not contain any spammy or phishing content.Check your bounce and complaint rates. If your bounce or complaint rate is too high, then AWS SES may pause your sending. You can check your bounce and complaint rates in the AWS Management Console by navigating to the SES dashboard and selecting the "Sending Statistics" option.Contact AWS Support to ask for more details about why your sending is paused. AWS Support can help you identify the cause of the issue and provide guidance on how to resolve it.Once you have identified and resolved the issue, you can request that AWS SES resume your sending by submitting a case to AWS Support.I hope these steps help you resolve the issue and resume your email sending.CommentSharemishdaneanswered 2 months agor3ticul 2 months agoYou haven't read my post, have you?:)I don't send spam content. The bounce and complaint rates are way bellow the warning lines. I already contacted the aws support and they say that the service is healthy "I have confirmed that your account is active and able to send email in the EU Frankfurt region." yet i can't send emails. I must say that the amazon support service is crap. :(Share"
"Hi,I need to monitor an URL in a way that if the website doesn't answer properly, I want to receive an alert.I just created a canary on CloudWatch, but my application isn't open to the world.How can I create a Security Grupo rule to allow monitoring from CloudWatch canary?ThanksFollowComment"
Security Group to allow canary monitoring
https://repost.aws/questions/QUvFMOdWTdQPSg7H2KK6ALcw/security-group-to-allow-canary-monitoring
false
"2OK, I've got the answer.I'll have to create a VPC endpoint and configure CloudWatch through the AWS PrivateLinkCommentShareronanlucioanswered 3 years ago"
"I am a brand new mturk worker. As soon as I started the completion of registering for it, it first said my cell phone was already registered with Amazon. I went back and tried this again (by this time, I tried this numerous times) and then it said my email/password login credentials were incorrect. Who should I talk to about this?FollowComment"
Brand new mturk worker is getting the runaround from Amazon
https://repost.aws/questions/QUrQIK0UmUT0y--iKdXfwPjw/brand-new-mturk-worker-is-getting-the-runaround-from-amazon
false
"0Can't log in with my registered account on amazon.com?In other words, you would be able to log in with the account with which you normally shop at Amazon.CommentShareEXPERTRiku_Kobayashianswered a month agoMrsAlvidrez a month agoYes that is correct. What should I do?ShareRiku_Kobayashi EXPERTa month agoIf the email address on the account is correct, try changing the password.Share"
"I'm working on deploying to LakeFormation via Terraform. Specifically, granting data location access to a lambda role. I'm getting an error when the role/user I'm deploying with in Terraform isn't an admin on LakeFormation (I haven't tried playing around w/ granular policies on the caller yet). Has anyone come across the same issue and what was the resolution? The caller is a service user which is used by other groups across the org, so I would ideally like to avoid elevating any more of its permissions.Configuration :resource "aws_lakeformation_permissions" "datalake-permissions" { principal = aws_iam_role.lambda-role.arn permissions = ["DATA_LOCATION_ACCESS"] data_location { arn = data.aws_s3_bucket.datalake-bucket.arn }}This is the error :error creating Lake Formation Permissions (input: { Permissions: ["DATA_LOCATION_ACCESS"], Principal: { DataLakePrincipalIdentifier: "arn:aws:iam::{account_id}:role/lambda_role" }, Resource: { DataLocation: { ResourceArn: "arn:aws:s3:::{my-bucket}" } } }): AccessDeniedException: Resource does not exist or requester is not authorized to access requested permissions.Also made sure the bucket exists and isn't an issue.FollowComment"
LakeFormation deployment with Terraform
https://repost.aws/questions/QU6WtIBGIOSFCoddSTp4IkoQ/lakeformation-deployment-with-terraform
false
"1Hello,I see you’re getting AccessDeniedException when you’re trying to create a resource of “aws_lakeformation_permissions” using Terraform script. It seems the IAM role/user which is used to create this resource doesn’t have the required permissions to create the Lake Formation Permissions.As you might know that all principals, including the data lake administrator, need the following AWS Identity and Access Management (IAM) permissions to grant or revoke AWS Lake Formation Data Catalog permissions or data location permissions with the Lake Formation API or the AWS CLI:————-> lakeformation:GrantPermissions-> lakeformation:BatchGrantPermissions-> lakeformation:RevokePermissions-> lakeformation:BatchRevokePermissions-> glue:GetTable or glue:GetDatabase for a table or database that you're granting permissions on with the named resource method————You can find more details on the documentation: https://docs.aws.amazon.com/lake-formation/latest/dg/required-permissions-for-grant.htmlI would suggest you to try giving the above permissions mentioned in the documentation to the role/user which is being used by the Terraform script to create the resources.If you still get the error, then I would suggest you to open a support case with AWS for further troubleshooting. You can use the following link for the same: https://support.console.aws.amazon.com/support/home#/case/createCommentShareSUPPORT ENGINEERJaykumar_Danswered 6 months agoEXPERTFabrizio@AWSreviewed 6 months ago"
"Hi all,I have setup EKSA loca cluster as per https://anywhere.eks.amazonaws.com/docs/getting-started/local-environment/ which is working fine.I have further setup kube-vip LB as per https://anywhere.eks.amazonaws.com/docs/tasks/workload/loadbalance/kubevip/arp/I am not able to connect using the service's EXTERNAL-IP. Able to connect using that IP from inside the pod though - screenschot here https://paste.pics/FW5HDFollowComment"
EKSA kind kube-vip service not accessible
https://repost.aws/questions/QUjkWkMt0WQYydJNqQmIPpcA/eksa-kind-kube-vip-service-not-accessible
false
"I am struggling to get EC2 instances deployed via an ASG joined to the domain.I get the following error each timeNew-SSMAssociation : Document schema version, 2.2, is not supported by association that is created with instance idI have tried various schema versions detailed Here however all fail with the same errorSSMdoc.tfresource "aws_ssm_document" "ad-join-domain" { name = "ad-join-domain" document_type = "Command" content = jsonencode( { "schemaVersion" = "2.2" "description" = "aws:domainJoin" "parameters" : { "directoryId" : { "description" : "(Required) The ID of the directory.", "type" : "String" }, "directoryName" : { "description" : "(Required) The name of the domain.", "type" : "String" }, "dnsIpAddresses" : { "description" : "(Required) The IP addresses of the DNS servers for your directory.", "type" : "StringList" }, }, "mainSteps" = [ { "action" = "aws:domainJoin", "name" = "domainJoin", "inputs" = { "directoryId" : data.aws_directory_service_directory.adgems.id, "directoryName" : data.aws_directory_service_directory.adgems.name, "dnsIpAddresses" : [data.aws_directory_service_directory.adgems.dns_ip_addresses] } } ] } )}template.tfdata "template_file" "ad-join-template" { template = <<EOF <powershell> Set-DefaultAWSRegion -Region eu-west-2 Set-Variable -name instance_id -value (Invoke-Restmethod -uri http://169.254.169.254/latest/meta-data/instance-id) New-SSMAssociation -InstanceId $instance_id -Name "${aws_ssm_document.ad-join-domain.name}" </powershell> EOF}The template is then referenced in the ASG Launch Template user_data section. Getting onto the instance I can see the script/logs and have confirmed the variables set (instance id for example).Full error message from the PS running belowNew-SSMAssociation : Document schema version, 2.2, is not supported by association that is created with instance idAt C:\Windows\system32\config\systemprofile\AppData\Local\Temp\EC2Launch228430162\UserScript.ps1:3 char:5+ New-SSMAssociation -InstanceId $instance_id -Name "ad-join-domain ...+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidOperation: (Amazon.PowerShe...sociationCmdlet:NewSSMAssociationCmdlet) [New-SSMAs sociation], InvalidOperationException + FullyQualifiedErrorId : Amazon.SimpleSystemsManagement.Model.InvalidDocumentException,Amazon.PowerShell.Cmdlets. SSM.NewSSMAssociationCmdletFollowComment"
aws_ssm_document addomainjoin error
https://repost.aws/questions/QUO6NUn0eaTsuObDcx9H6-hQ/aws-ssm-document-addomainjoin-error
true
"0Accepted AnswerHello,I noticed that you are using New-SSMAssociation with the parameter -InstanceId.However, documents that use schema version 2.0 or later must use -Target instead of -InstanceIdQuoting the :InstanceId has been deprecated. To specify a managed node ID for an association, use the Targets parameter. Requests that include the parameter InstanceID with Systems Manager documents (SSM documents) that use schema version 2.0 or later will fail. In addition, if you use the parameter InstanceId, you can't use the parameters AssociationName, DocumentVersion, MaxErrors, MaxConcurrency, OutputLocation, or ScheduleExpression. To use these parameters, you must use the Targets parameter.You can replace -InstanceId with:-Target Key=InstanceIds,Values=$instance_idPlease refer to the following links for more details:https://docs.aws.amazon.com/powershell/latest/reference/items/New-SSMAssociation.htmlhttps://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/SSM/TTarget.htmlCommentShareSUPPORT ENGINEERTulio_Manswered a year agoAWS-User-4488665 a year agoThank you for the assistance.Share"
"Hello,We are in need to call lambda function within our organizations AWS account from Vendor's AWS account based on SNS notification.Currently we have roles and permissions configured in such way that we can call / use vendor's AWS resources but they can't access our's and at the same time we don't want to open it to them.What should be our best path to make this communication work?Thank You,YogeshFollowComment"
Call cross account lambda from SNS
https://repost.aws/questions/QUQHWo7gOOTMWWqv4f4vxSnQ/call-cross-account-lambda-from-sns
false
"0Hi Yogesh,To my understanding you must allow partial access (limited to sns arn) from the vendor for it to work, so here my suggested steps:create the sns topic in the vendor's account and update it's resource-based policy to allow lambda of your account to subscribe to it.update the resource-based policy of the lambda to allow invocation from the vendor's account sns arn.subscribe the lambda function in the vendor's account sns topic.Check out this link for a step by step tutorial:https://www.shogan.co.uk/aws/aws-sns-to-lambda-cross-account-setup/SincerleyHeikoCommentShareHeikoMRanswered 4 months ago"
IoT ThingAttribute has a value pattern that does not allow plus (+) and space ( ). Is there any way to work around this on our side in order to avoid pattern-related errors?
Why does ThingAttribute have a value pattern that does not allow plus (+) and space ( )?
https://repost.aws/questions/QU5Ey6dhxWR9aFc5KYGhwu0A/why-do-thingattribute-has-value-pattern-that-is-not-allowing-plus-and-space
false
"0Hi. The allowed patterns, for both keys and values, are defined here: https://docs.aws.amazon.com/iot/latest/apireference/API_ThingAttribute.htmlYou can't use space or plus directly in Thing attributes. You could perhaps encode your attributes. Or you could perhaps store data instead in some other storage service and use that rather than Thing attributes.CommentShareEXPERTGreg_Banswered a year ago"
"I have managed to get AWS VPN Client working with openSUSE Leap 15.4 - the solution will also work for other distributions that use netconfig for updating the DNS resolver, instead of systemd-resolve. Everything is documented here.I've been searching for a way to contribute, hoping that support may be expanded to openSUSE and other distributions in the future, but have been unsuccessful thus far, hence the question.FollowComment"
How to contribute support for openSUSE to AWS VPN Client?
https://repost.aws/questions/QUWihfHFT1SSy3VFipasgPUw/how-to-contribute-support-for-opensuse-to-aws-vpn-client
false
"Large production database running stable on PostgreSQL 9.6.6/8 for a for a year. Today upgraded to 9.6.12 and in the first few hours encountered two segmentation faults causing the database to restart. Generally …LOG: Segmentation faultDETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.FATAL: Can't handle storage runtime process crashLOG: database system is shut downFATAL: the database system is in recovery modeThe same query was logged in both case (basic INNER JOINs).Is this a known issue? Any advice to bypass it?Dump/Restore is prohibitive. I'd like to get ahead of this before the work-week starts.FollowComment"
Aurora PostgreSQL 9.6.12: Segmentation fault and restart
https://repost.aws/questions/QUSsgfBVuxRvCJFU1yORJRMg/aurora-postgresql-9-6-12-segmentation-fault-and-restart
false
"0We continue to see the identical segfault/restart pattern every 6 hours or so.We're rebuilding the large indexes and trying to narrow to a reproducible case.A similar case, but on a different stack: https://github.com/postgrespro/pg_pathman/issues/193CommentShareNorthrockanswered 4 years ago0We copied the db to a test instance and can reproduce the segfault with a one-line SELECT statement.CommentShareNorthrockanswered 4 years ago0Can we grab a stack trace?CommentShareNorthrockanswered 4 years ago0Hi Northrock. I am a development manager for Aurora PostgreSQL. We are very sorry for the issue you have encountered on 9.6.12. Our engineers have identified the problem and we have prepared a patch release. This is being deployed to our production regions now and will appear as available maintenance for your cluster once it reaches your region.If you would like to send me a pm with your region and cluster name we can provide some additional options in terms of patching.CommentShareAWS-User-9442178answered 4 years ago0That's great news! Once deployed, I'll recheck our reproducible case and report back quickly.CommentShareNorthrockanswered 4 years ago0The issue appears to be fully resolved.Thanks for the special support. Much appreciated.CommentShareNorthrockanswered 4 years ago0Hello,We just upgraded from 9.6.8 to 9.6.12 last night, and we immediately started seeing crashes with a frequency of about once/hour.Is there another fix/upgrade available for the 9.6-compatible Aurora series?CommentSharedimitrisanswered 4 years ago0Updating Aurora 1.5.0 to 1.5.1 resolved our issue.SELECT AURORA_VERSION();CommentShareNorthrockanswered 4 years ago0Hello AWS,Check my thread,. We are on Aurora 10.7 and we are running into this all the time, at few times a week. It just bought down our live site again.https://forums.aws.amazon.com/thread.jspa?threadID=303997CommentShareVBKanswered 4 years ago0Roll out the AURORA update faster, AWS Support!This same problem started randomly hitting our 9.6 clusters this week and we just can only watch them restart until things clear up.Typically we start seeing random queries start taking longer than normal and some take 10 or more minutes. Normally the same query would average under 2 seconds.I've had to delete and reindex some indices due to errors like: Attempting to read past EOF of relation "base/16402/24527244". blockno=6400 nblocks=3834Aurora is on version 1.5.0 -- which is something only AWS can update.CommentSharegrutz-802answered 4 years ago"