Columns: Description (stringlengths 6 to 76.5k), Question (stringlengths 1 to 202), Link (stringlengths 53 to 449), Accepted (bool, 2 classes), Answer (stringlengths 0 to 162k)
"Hi, good evening. I would like to ask if there is a way to maintain the visual structure or shape of a pdf file (whether it is a text-only file or with tables) using only the 'ocr' function of the textract service? I would need to translate large quantities of documents that are not always printed well or digitized well later. I tried to do some tests and the text extraction is very precise and using 'Translate', I would be able to speed up the work a lot. So I'd like to ask if there's a way to keep the PDF a bit integrated? Or if i can do it in a second time with some functions?second question: is it possible to translate documents in PDF or Word format with the translate service?Thanks in advance for your reply.Btw, happy new year :)FollowComment"
Is it possible to maintain the shape of a pdf using textract? and translate docs with translate?
https://repost.aws/questions/QUT_d6dHzkRdWJ4IyB5xGEZQ/is-it-possible-to-maintain-the-shape-of-a-pdf-using-textract-and-translate-docs-with-translate
false
"0Hi,If you want to extract the structure of the document, the best way would be to use the AnalyzeDocument API, it will extract the different relations and structural element such as Table, Key Value pair, ...However if you want to only use the DetectText Apis, you will get the bounding box coordinate for each of the WORD or LINE detected, which you can use to reconstruct the document by placing the text in it's original position. (https://docs.aws.amazon.com/textract/latest/dg/how-it-works-document-layout.html the information is in Geometry) With this you will just have the text and no information regarding the Table structured or any other information that was previously in the document.Regarding your second question, Textract doesn't do document conversion, we are extracting text and structure information from the document, but we are not recreating a document similar to the one that you sent.I hope it helps.Happy New Year to you as well :)CommentShareCyprien_Danswered 5 months agorePost-User-4069813 5 months agoNot being a developer, it's a bit complicated for me. May I ask where you have to put the Json code? I thought there was a link where you put the pdf file to get the ocr. Thanks for the answer though :)Share"
"I am using AWS EC2 for my final year project, and I am starting to set up Python and spark to do the big data analysis.I am following this website: Jupyter Notebooks on AWS EC2 in 15 (mostly easy) stepsWhen I did step 13 and type jupyter notebook in my terminal, I cannot access the website? And this error occurs:PermissionError: [Errno 13] Permission deniedAny idea what could be causing this?FollowComment"
PermissionError: [Errno 13] Permission denied aws ec2
https://repost.aws/questions/QU9vDlQLO2RqmjTxSd8xjKeA/permissionerror-errno-13-permission-denied-aws-ec2
true
Accepted Answer: I think this is an OS error. Either you are specifying a path with a leading "/" somewhere, or you need to change the directory permissions. Use this command: chown -R user-id:group-id /path/to/the/directory to change the permissions so the web server can access the directory. To check which user owns the web server process, you can try: ps aux | grep httpd | grep -v grep (Answered by harshmanvar, 4 years ago)
"We're an IPv6 shop using the AWS Direct Connect (Private VIF). Since API Gateway is not a dual-stack service, we need a workaround to be able to access it over the Direct Connect. We cannot use Cloudfront. Ideally, we'd like to use a Network Load Balancer (dual stack) to forward the API Gateway, but will consider any other ideas or experiences that others might have. In all instances of tutorials I've studied, it seems that the coin is flipped...in that API Gateway can contact the Network Load Balancer via execute-api endpoints. ...but we need the visa-versa. My ask here is can it be done, and if so, how?FollowComment"
Can a network load balancer front an API Gateway?
https://repost.aws/questions/QUcTBfqmPCTseTuOsUfH1Cnw/can-a-network-load-balancer-front-an-api-gateway
false
"3You can place an NLB in front of a Private API. The target group for the NLB needs to be IPs and you will need to use the IP addresses that are listed in the VPC Endpoint ENIs for the Private API.CommentShareEXPERTUrianswered a year agohezron a year agoHi Uri. Thank you for your reply. I've done just as you outlined. Private REST API (petstore) and have confirmed that I'm referencing the correct vpce within it. I have created an IP Target group with the internal IP addresses assigned to the endpoint ENIs. Once it was all wired up, it's time to test. In the web browser, when I hit the DNS for my NLB, it churns a bit then attempts to download a DMS file (Database migration?). That's progress... I'm definitely not getting through to my API tho. Am I missing something?ShareUri EXPERTa year agoYou probably need to do the TLS termination on the NLB and for that you will need to use a certificate there. Are you using it? Try it with curl -v to see what is going on.Share"
"HI,I wanted to created EC2 instance in EU-WEST-1 region in Terraform. When I use official AMI ami-0d71ea30463e0ff8d I see the below error?Error launching source instance: InvalidAMIID.NotFound: The image id '[ami-0d71ea30463e0ff8d]' does not exist.I tried different AMIs and different Regions, but error is always the same.FollowComment"
AMI not found error in Terraform
https://repost.aws/questions/QUqm3dcKUPROq71GZPHT-bHg/ami-not-found-error-in-terraform
true
"2Accepted AnswerHi thereFrom the note I understand you wanted to created EC2 instance in EU-WEST-1 region in Terraform. When you use official AMI ami-0d71ea30463e0ff8d you see the below error? Error launching source instance: InvalidAMIID.NotFound: The image id '[ami-0d71ea30463e0ff8d]' does not exist. Please correct me if my understanding is wrong.Please note this can be cause by working in different region. Check if the AMI is in the same region where you want to create an instance.This allows you to more easily specify the type of AMI you want and have Terraform automatically use that AMI, including the option to have it automatically update the AMI in use when newer AMIs matching your criteria are available. It will also make it easier to manage using the same AMI in different regions as the AMI ID is different for each region it is copied to.Referencehttps://aws.amazon.com/premiumsupport/knowledge-center/ec2-ami-error-not-public/https://aws.amazon.com/premiumsupport/knowledge-center/ec2-instance-launch-copied-ami/CommentShareNonkululekoanswered a year agoEXPERTBrettski-AWSreviewed a year ago0Hi, it fixed, it was a region problem!CommentSharemyronix88answered a year ago"
"computer was formatted and s-3 browser was re-installed. But sit was not to be seen. Therefore I reloaded the web site again from our back up. But site www.miniaturelovers.com is partially down. I can see few pages but majority of pages cannot be seen. Why. ? Urgent reply awaited.FollowCommentskinsman EXPERT3 months agoHi, you need to clarify a few things to get help. What's the architecture of your website? You mention s3 but what other components? How is content split across those components? Which did you reload from backups? What does formatting your compouter and reinstalling an S3 browser have to do with whether your public website was working or not?Share"
www.miniaturelovers.com not working.
https://repost.aws/questions/QUjtk7K7LPTAu-Z66traDFCg/www-miniaturelovers-com-not-working
false
"1I saw your web pages are behind Cloudfront and maybe the pages are located in S3. If yes, you can try to setup OAC (Origin Access Control) to provide the access control from CloudFront to S3 by following links.https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.htmlhttps://aws.amazon.com/premiumsupport/knowledge-center/s3-website-cloudfront-error-403/If the issue is not caused by OAC, then you can check the troubleshooting suggestions: https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-troubleshoot-403-errors/CommentShareBard Lananswered 3 months ago"
"I have a AWS Managed AD directory service. I am not able to seamlessly join the Windows Ec2 instance to Domain. If i RDP into the instance and try to join the domain manually it works.I am also able to join domain by running the following command in running EC2 instance: AWS-JoinDirectoryServiceDomain and AWS-JoinDirectoryServiceDomainHere is the error message that i am getting:Execution Summary: XXXXXXXX-XXXX-XXXX-XXX-XXXXXXXX1 out of 1 plugin processed, 0 success, 1 failed, 0 timedout, 0 skipped. The operation aws:domainJoin failed because Domain join failed with exception: Domain Join failed exit status 1.I have already confirmed all the required ports are open. Infact i have allow everything in both SG and ACL.FollowComment"
Windows Ec2 instance seamless domain join
https://repost.aws/questions/QU69L5ne35RBSj07ufSxX6cw/windows-ec2-instance-seamless-domain-join
false
When you launch the EC2 instance, are you choosing to join the domain? If you are using the new EC2 launch wizard you will find this option at the bottom of the screen under "Advanced details", where you pick which domain it will join. Opening security groups is not the right path to making this work. You MUST make sure that the EC2 instance has an IAM instance role that has at least the following policy: arn:aws:iam::aws:policy/AmazonSSMDirectoryServiceAccess. For example, here is an IAM instance role definition in CloudFormation that grants domain-join permission and also SSM managed-instance permission:
EC2SsmIamRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            Service: [ec2.amazonaws.com]
          Action: ['sts:AssumeRole']
    Path: /
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
      - arn:aws:iam::aws:policy/AmazonSSMDirectoryServiceAccess
(Answered by Alex_AWS, 10 months ago)
"I'm facing following issue when creating Serverless Aurora Postgres 14.5 Cluster using AWS CDK. I've also tested out different versions such as 15.2, 14.6, 14.7.Error:"The engine mode serverless you requested is currently unavailable. "self.aurora_serverless_db = rds.ServerlessCluster( self, "AuroraServerlessCluster", engine=rds.DatabaseClusterEngine.aurora_postgres(version=rds.AuroraPostgresEngineVersion.VER_14_5), vpc=self.vpc, vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_ISOLATED), default_database_name="serverless_db", backup_retention=Duration.days(1), deletion_protection=True, enable_data_api=True, parameter_group=rds.ParameterGroup.from_parameter_group_name( self, "AuroraDBParameterGroup", "default.aurora-postgresql14" ), scaling=rds.ServerlessScalingOptions( auto_pause=Duration.minutes(30), # Shutdown after minutes of inactivity to save costs min_capacity=rds.AuroraCapacityUnit.ACU_2, max_capacity=rds.AuroraCapacityUnit.ACU_4 ), )All is working well with engine = rds.DatabaseClusterEngine.AURORA_POSTGRES and parameter_group ="default.aurora-postgresql10"FollowComment"
Unavailable Serverless Db Engine Mode Error
https://repost.aws/questions/QUTCKQz8CPSnSExbt-Oi-8xQ/unavailable-serverless-db-engine-mode-error
true
"0Accepted AnswerServerlessCluster(), I think Aurora Serverless V1 is running.With Aurora Serverless V1, only Aurora PostgreSQL 10 and 11 compatible versions can be configured.https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.relnotes.htmlTo use Aurora Postgres 14.5 compatible version, you must use Aurora Serverless V2.I think you have to force it to Aurora Serverless V2 with rds.DatabaseCluster() as described in the Issues here.https://github.com/aws/aws-cdk/issues/20197CommentShareEXPERTRiku_Kobayashianswered 9 days ago"
"Hello,I created a new packaging instance to build a new version of my application that I want to deploy (as I always did). I used the Amazon WorkSpaces Application Manager Studio, chose my application and clicked "Update". After my changes I used "Upload". After uploading the status of my application version on "Dashboard" becomes "Upload complete". Then I started my EC2 validation instance and connected to it. In the Amazon WorkSpaces Application Manager I went to "Pending Apps" and clicked "Refresh" but my new version didn't appear. I waited now for more than 1,5 hours and it's still not visible.In my packaging instance I checked the log files but the last steps seemed to be successful:2022-05-30 07.06.51.944 Info: Writing final package data...2022-05-30 07.06.51.944 Info: Packaging application...2022-05-30 07.06.52.382 Info: Writing project file "C:\Users\ADMINI~1\AppData\Local\Temp\2\EBC567BD-A637-4603-AF2C-31D1D7837E4A_15.stw"2022-05-30 07.06.52.382 Info: Done writing project file "C:\Users\ADMINI~1\AppData\Local\Temp\2\EBC567BD-A637-4603-AF2C-31D1D7837E4A_15.stw"2022-05-30 07.06.55.415 Info: Done patching in 4.2s2022-05-30 07.06.55.462 Info: Successfully created package.2022-05-30 07.06.55.462 Info: Successfully created package.For testing I created another new packaging instance, chose my application in the Amazon WorkSpaces Application Manager Studio and create "Update". I got the following error message:Failed to extract application from the specified appset. Appset file might be invalid.So what went wrong and what can I do to continue?Do you need further information to help me?Thanks a lot in advance,MichaelaFollowComment"
AWS Workspaces: Problem that uploaded package doesn't appear on validation instance
https://repost.aws/questions/QUMH3kkPvsRK2ty5PG2dHp3Q/aws-workspaces-problem-that-uploaded-package-doesn-t-appear-on-validation-instance
false
"0Hello Michaela! Looking at your question I believe you have done everything correctly. In WAM Studio Dashboard, the app should have been shown as "Upload Complete" and then as "Pending Testing".Please confirm that the validation instance was launched from the WAM Player AMI, in the same VPC and Public Subnet as the WAM Studio packaging instance and that it has the same role attached to it (AmazonWamAppPackaging that has the managed policy AmazonWorkSpacesApplicationManagerAdminAccess attached to it)If you still have issues I suggest to open a new support case with us and to provide the logs from WAM Studio (C:\Program Files\Amazon\WAM Studio\wamstudio.log) and WAM Player (File and Options - Log tab - choose View Log, along with the instance ID's for each instance.References:https://docs.aws.amazon.com/wam/latest/adminguide/application_packaging.html#cant_get_appshttps://docs.aws.amazon.com/wam/latest/adminguide/iam.html#app_package_rolehttps://docs.aws.amazon.com/wam/latest/adminguide/application_log_files.htmlCommentShareSUPPORT ENGINEERFrancisco_Lanswered a year agoAWS-User-3937196 a year agoHello,I can confirm that the validation instance was launched from the WAM Player AMI, in the same VPC and Public Subnet as the WAM Studio packing instance and that it has the same role AmazonWamAppPacking attached to it .But the app never change to "Pending Testing". As a workaround I created a new application (instead of creating a new version of the existing application) and this works fine.I will then open a support case.Thanks a lot for your answer!Share"
"I have a CDK process that creates a cluster with 1 t3a.medium and 3 t3a.micro instances running. The process creates those instances and registers them with the cluster. I can also click on the cluster in the AWS console and see instances under the infrastructure tab. So I know that there are instances registered with the cluster. I can also look at instances in the CLI doing:aws ecs list-container-instances --filter "attribute:ecs.instance-type == t3a.micro" --cluster Cluster0F4AFB82-9XhtVavw8Lz1{ "containerInstanceArns": [ "arn:aws:ecs:us-east-1:xxx:container-instance/Cluster0F4AFB82-9XhtVavw8Lz1/9c016f70e7b74b09a9ece93c7d84b465", "arn:aws:ecs:us-east-1:xxx:container-instance/Cluster0F4AFB82-9XhtVavw8Lz1/ffcbd6f70e7b74b09a9ece93c7d846754", "arn:aws:ecs:us-east-1:xxx:container-instance/Cluster0F4AFB82-9XhtVavw8Lz1/e4ce4a7378bf42a0bcac8a2c5463e34a" ]}Or looking at all of the instances:[cloudshell-user@ip-10-2-36-180 ~]$ aws ecs list-container-instances --cluster Cluster0F4AFB82-9XhtVavw8Lz1{ "containerInstanceArns": [ "arn:aws:ecs:us-east-1:xxx:container-instance/Cluster0F4AFB82-9XhtVavw8Lz1/60dff172513d4e609a2ad0f9f8ba6abd", "arn:aws:ecs:us-east-1:xxx:container-instance/Cluster0F4AFB82-9XhtVavw8Lz1/ffcbd6f70e7b74b09a9ece93c7d846754", "arn:aws:ecs:us-east-1:xxx:container-instance/Cluster0F4AFB82-9XhtVavw8Lz1/9c016f70e7b74b09a9ece93c7d84b465", "arn:aws:ecs:us-east-1:xxx:container-instance/Cluster0F4AFB82-9XhtVavw8Lz1/e4ce4a7378bf42a0bcac8a2c5463e34a" ]}However, when I run my tasks I get Reason: No Container Instances were found in your cluster. for the all tasks. I do have placement constraints on the tasks I'm attempting to run on the t3a.micros. But you can see above from the CLI --filter AWS returns the instances that I expect.If I run ecs-cli check-attributes I get:ecs-cli check-attributes --task-def solrNode --container-instances abe6a20b4a5543a784a4ae36cfddcb1a --cluster Cluster0F4AFB82-OaQPQblRfm8DContainer Instance Missing AttributesCluster0F4AFB82-OaQPQblRfm8D NoneHere is my cluster code: Cluster solrCluster = Cluster.Builder.create(this, appId("Solr:Cluster")) .containerInsights(true) .vpc(vpc) .capacity(AddCapacityOptions.builder() .autoScalingGroupName("solr") .vpcSubnets(getSubnetSelection(NetworkStack.STORAGE_SUBNET)) .minCapacity(1) .maxCapacity(4) .instanceType(settings.getInstanceType("solr.image.type")) // settings returns t3a.medium .machineImage(EcsOptimizedImage.amazonLinux2()) .updatePolicy(UpdatePolicy.rollingUpdate()) .keyName(settings.get("ssh.key.name")) .blockDevices(listOf(BlockDevice.builder() .deviceName("/dev/xvda") .volume(BlockDeviceVolume.ebs(settings.getInt("solr.hdd.gb"), EbsDeviceOptions.builder() .deleteOnTermination(false) .volumeType(EbsDeviceVolumeType.GP2) .encrypted(true) .build())) .build() )) .build() ) .build(); solrCluster.addCapacity(appId("Zookeeper"), AddCapacityOptions.builder() .autoScalingGroupName("zookeeper") .vpcSubnets(getSubnetSelection(NetworkStack.STORAGE_SUBNET)) .minCapacity(3) .maxCapacity(3) .instanceType(InstanceType.of(InstanceClass.T3A, InstanceSize.MICRO)) .machineImage(EcsOptimizedImage.amazonLinux2()) .updatePolicy(UpdatePolicy.rollingUpdate()) .keyName(settings.get("ssh.key.name")) .build() );My task is: ContainerDefinition container = solrTask.addContainer( "Solr:Container", ContainerDefinitionOptions.builder() .image(ContainerImage.fromRegistry("docker.io/solr:9.2")) .containerName("Solr9") .essential(true) .cpu(2048) .memoryReservationMiB(settings.getInt("solr.memory.reserve")) // returns 2048 .environment(mapOf( 
"SOLR_HEAP", settings.get("solr.jvm.max"), "ZK_HOST", "zoo1.solr.cloud:2181,zoo2.solr.cloud:2181,zoo3.solr.cloud:2181" )) .portMappings(listOf( portMapping(8983,8983) )) .ulimits(listOf(Ulimit.builder().name(UlimitName.NOFILE).softLimit(65000).hardLimit(65000).build())) .logging(LogDriver.awsLogs(AwsLogDriverProps.builder() .logGroup(appLogGroup) .streamPrefix("solr-" + settings.getEnv() + "-") .build())) container.addMountPoints(MountPoint.builder() .sourceVolume(efsVolumeName) .readOnly(false) .containerPath("/opt/solr/backup") .build());And for the other tasks ContainerDefinition container = pair.left.addContainer( appId("Zookeeper:Container:id"+pair.right), ContainerDefinitionOptions.builder() .image(ContainerImage.fromRegistry("docker.io/zookeeper:3.8")) .containerName("zookeeper_" + pair.right) .essential(true) .cpu(2048) .memoryReservationMiB(384) .portMappings(listOf( portMapping(2181, 2181), portMapping(2888, 2888), portMapping(3888, 3888), portMapping(8080, 8080) )) .environment(mapOf( "ZOO_MY_ID", pair.right.toString(), "ZOO_SERVERS", "server.1=zoo1.solr.cloud:2888:3888" + " server.2=zoo2.solr.cloud:2888:3888" + " server.3=zoo3.solr.cloud:2888:3888" )) .logging(LogDriver.awsLogs(AwsLogDriverProps.builder() .streamPrefix("zookeeper-" + settings.getEnv()) .logGroup(appLogGroup) .build())) .build() ); Ec2Service service = Ec2Service.Builder.create(this, "Zookeeper:Service:" + pair.right) .taskDefinition(pair.left) .vpcSubnets(getSubnetSelection(NetworkStack.STORAGE_SUBNET)) .placementConstraints(listOf(PlacementConstraint.memberOf("attribute:ecs.instance-type == t3a.micro"))) .cluster(solrCluster) .securityGroups(listOf(storageAccessGroup)) .cloudMapOptions(CloudMapOptions.builder() .name("zoo" + pair.right) .cloudMapNamespace(privateDns) .dnsRecordType(DnsRecordType.A) .container(container) .dnsTtl(Duration.seconds(10)) .build()) .build();So I clearly have instances attached to my cluster. And those instance stats match. And my placement constraints show that there are instances that match those as well for those services. So why is it not working?!FollowComment"
ECS and CDK: Trouble placing tasks on the correct nodes
https://repost.aws/questions/QUJ4GV-3l2SjuGuT_hSPghWQ/ecs-and-cdk-trouble-placing-tasks-on-the-correct-nodes
false
"0Thank you for posting the query.To answer your question, we require details that are non-public information. Please open a support case with AWS using the following linkhttps://console.aws.amazon.com/support/home#/case/createCommentShareSUPPORT ENGINEERKalyan_Manswered 17 days ago"
"I am a new AWS user. This is the first time I tried starting the CloudShell. I get the following error:Unable to start the environment. To retry, refresh the browser or restart by selecting Actions, Restart AWS CloudShell. System error: Environment was in state: CREATION_FAILED. Expected environment to be in state: RUNNING. To retry, refresh the browser or restart by selecting Actions, Restart AWS CloudShell.I tried to run CloudShell while logged in as the root user as well as a non-root user with AdministratorAccess . I tried the policy simulator and it says that the user should have access to CloudShell. I tried to run CloudShell on Firefox and Chrome. I also tried changing regions but I get the same error every time.I've seen several similar posts here and none of them seem to be resolved:https://repost.aws/questions/QUe4pWoMP2TqGDW6f4CXomvA/can-not-start-cloud-shellhttps://repost.aws/questions/QUPaqlvTw2RHi7OTrPVCzLsg/cloudshell-wont-starthttps://repost.aws/questions/QUufc-elFvSvqYlbJw95RI6Q/unable-to-start-the-environment-to-retry-refresh-the-browser-or-restart-by-selecting-actions-restart-aws-cloud-shellhttps://repost.aws/questions/QUH54A371dRvej5J1G_yZogw/error-when-launching-aws-cloud-shell-unable-to-start-the-environmenthttps://repost.aws/questions/QUl4tcVsElQZGEpnJ7JaCJfw/unable-to-start-cloudshellDoes anyone know what could be causing this?FollowComment"
CloudShell: Unable to start the environment - CREATION_FAILED
https://repost.aws/questions/QUGWmmpgPNRsaL1v8t9dmM1w/cloudshell-unable-to-start-the-environment-creation-failed
true
"2Accepted AnswerIf anyone else encounters this issue, there's nothing you can do on your end, you have to talk to AWS support. Even though this was a technical issue and I am on a basic plan that doesn't include technical support, I managed to get the issue resolved by selecting creating a case under:Service: AccountCategory: Other account issuesThe support was very helpful and I got the issue sorted out within 12 hours or so.CommentSharerePost-User-5174146answered 7 months agoJamison Tuesday 4 months agoI've tried this, and they were fast to respond, but nothing that they suggested seemed to help, it was was all of the same suggestions that I've found everywhere else unfortunately.Share0Have you created a support case to report the issue? I recommend opening the support case especially if you have a brand-new AWS account.CommentShareTaka_Manswered 7 months agorePost-User-5174146 7 months agoYes, the issue has been resolved.Share"
"I have created a dynamic segment in amazon pinpoint with criteria and it fetched the endpoints matched with the criteria and then used this segment in my journey.Journey use contact center (Amazon Connect) for delivering the msg (Voice), journey runs and endpoints/numbers start being dialed. perfect!Now I want to check/see the outcome of endpoints entered in the journey as how many endpoints/contacts dialed and what are the outcomes for every endpoint/contact e.g. status Connected when customers picked the call, status Missed when customers missed the call.Any API? or why out? can we set a variable as a metric or user attribute in a segment and then use it later for checking the outcome?FollowComment"
How to get the outcome of an endpoint entered in the amazon pinpoint journey?
https://repost.aws/questions/QU07VLdFf2QueRArPeXl8jNQ/how-to-get-the-outcome-of-an-endpoint-entered-in-the-amazon-pinpoint-journey
false
"0Hi,Thanks for your patience as we worked with the Pinpoint team regarding your query. You can use Journey metrics. In this page, the Contact center metrics can be found under "Activity-Level Engagement Metrics" -> "Contact Center Activity"You can also follow the documentation here in our blog post to configure Eventbridge to log Amazon Connect Contact Events and send it to a target. In the blog the Contact event is sent to the Cloudwatch but you can also configure a different target to process the log event such as a Lambda function which can further process the event.In the above blog post, please check under Step 6: Setting up Amazon EventBridge to log events in Amazon CloudWatch for tracking the calls that are handled by Amazon Connect.. You can use the campaignId as your reference when searching the events. Please check Step 9: Tracking Amazon Connect outbound calls in Amazon CloudWatch Logs for the sample events.I hope this information helps. Please let us know if you may have any other questions. We will be happy to help you.CommentShareSUPPORT ENGINEERRyan_Aanswered 9 months ago"
"Hi, I'm being charged up to USD 16,000 with AWS SageMaker when I'm not using anything. I keep receiving suspicious activity notification with my account. What should I do to terminate this SageMaker billing? Should I close my account?FollowComment"
Suspicious Billing with SageMaker
https://repost.aws/questions/QU31OY4Rt9TVeEJBL4ctmgnA/suspicious-billing-with-sagemaker
false
"-1If you identify suspicious activity on your account, you shouldIdentify any unauthorized actions taken by the AWS Identity and Access Management (IAM) identities in your account.Identify any unauthorized access or changes to your account.Identify the creation of any unauthorized resources or IAM users.Steps to perform these activities can be found here:https://aws.amazon.com/premiumsupport/knowledge-center/potential-account-compromise/You should as well look at all the resources in your account and identify/terminate the ones that are not legitimate.AWS cost explorer can help you identify where these Sagemaker charges are and you can terminate them:https://aws.amazon.com/aws-cost-management/aws-cost-explorer/Open a case with support if you have any question. Whatever support level you currently have you can always create a Billing/account case to understand any past or upcoming charges.Once this is done, make sure you apply security best practices to protect your aws account :https://aws.amazon.com/premiumsupport/knowledge-center/security-best-practices/CommentShareJean Malhaanswered 10 months ago"
"HiI would like to create a Notebook to be shared with the team.And I followed this link to configure my user with required roles: AmazonRedshiftQueryEditor and AmazonRedshiftReadOnlyAccesshttps://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-access-control-identity-based.html#redshift-policy-resources.required-permissions.query-editor-v2But, I can't add policies to this role I am using to login...Is there any other way, I can do this?thanksCKFollowComment"
How to share notebooks in Redshift Editor?
https://repost.aws/questions/QUW9Xjh81mQaqaPoTDWFgSHw/how-to-share-notebooks-in-redshift-editor
true
"1Accepted AnswerThere are different ways to set up policies as described in this document.https://docs.aws.amazon.com/singlesignon/latest/userguide/troubleshooting.html#issue4Open the IAM Identity Center in the AWS Organizations parent account and add the policy to the permission set or follow the steps below to create a permission set.https://docs.aws.amazon.com/singlesignon/latest/userguide/howtocreatepermissionset.htmlIf this answer leads to a resolution, please approve the answer for the betterment of the community.CommentShareEXPERTRiku_Kobayashianswered a month ago"
"I have confirmed the metadata loaded for the integration specifies<md:IDPSSODescriptor WantAuthnRequestsSigned="true" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">However, the generated AuthnRequest from cognito is not signed.FollowComment"
How to get Cognito SAML integration to sign AuthnRequest?
https://repost.aws/questions/QU2CXBvj4yT86koJsGr2I-eQ/how-to-get-cognito-saml-integration-to-sign-authnrequest
true
"1Accepted AnswerHi,Cognito doesn't support AuthnRequest signing at this time. The assertion consumer endpoint for Cognito user pool doesn't change for the user pool (unless you change the user pool domain), so is the SP entity Id. These values must be per-configured in the IdP and usually if the AuthnRequest has any different values the request will be rejected by the IdP.More details on federating to SAML IdP from Cognito user pool.CommentShareEXPERTMahmoud Matoukanswered a year ago13ogrammer 9 months agoHi Mahmoud,Is "AuthnRequest signing" in the Cognito User Pool roadmap? If yes, when is it likely to be released?Cheers!Share"
"Hello,I would like to backup my instances and s3 buckets completely. I attached the needed permision for AWSBackupDefaultServiceRole role. Still there is a problem and i can not figure it out. Any guess?ROLECLOUDTRAILFollowComment"
AWS Backup IAM Issue Error on Cloudtrail InvalidParameterValueException
https://repost.aws/questions/QUrrJ80Su7TrelxGkxp5CD9w/aws-backup-iam-issue-error-on-cloudtrail-invalidparametervalueexception
false
"0Hi there,There wasn't a lot of details in your post, but I'll make a few assumptions.I tried what I think is the same thing you tried: with an on-demand backup (EC2), and also got the error the first time. Then a few minutes later, I tried it a second time (same on-demand/EC2) and was successful.Reading in the AWS documentation (https://docs.aws.amazon.com/aws-backup/latest/devguide/iam-service-roles.html), it says under "Creating the default Service Role"... "3. Verify that you have created the AWSBackupDefaultServiceRole in your account by following these steps: a. Wait a few minutes...."It appears there is a delay the first time the role is created.Also, I noticed that after creating the role in an EC2 backup, it doesn't add the S3 permissions thereafter. After EC2, I tried an initial S3 backup which failed. I manually added the AWSBackupServiceRolePolicyForS3Backup (and AWS...S3Restores) policies (as you show in your screenshot), and then ran an S3 backup. It's been running for 15 minutes, whereas the initial backup failed almost immediately. I assume this one will be successful.Hope this helps!CommentShareRob_Elanswered 9 months ago"
"Setting user attributes values in cognito user pool, causes those attributes to be present in the IDToken.How can I select which user attributes actually go into the ID Token? Via lambda trigger and no UI or API operation for that definition?Tks,DDFollowComment"
Amazon cognito - user attributes in ID Token
https://repost.aws/questions/QUFqpGvUqrSk6iZRZBsRdjCg/amazon-cognito-user-attributes-in-id-token
false
"1Hello,In order to stop an attribute from being present in the ID token, you need to unselect that attribute from the list of readable attributes for the app client.Please expand the "Attribute permissions and scopes" section in document [1] for reference to attribute read/write settings in an Cognito user pool app client. You need to modify the "Set attribute read and write permissions" settings (if you are using old Cognito console) or the "Edit attribute read and write permissions" settings (if you are using new Cognito console).After you unselect an attribute from this list, that attribute will no longer be present in the ID token.I believe the information is helpful to you. In case you have any further queries/concerns then please let me know.--References--[1] https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-attributes.htmlCommentShareSUPPORT ENGINEERTarit_Ganswered 6 months ago"
"I performed a windows server upgrade from server 2012 r2 Standard to server 2019 standard on a EC2 instance type t2 Large and now i am unable to launch / reboot / start the instance as i get the error "The instance 'i-02fxxx' is not in a state from which it can be started". This was an in-place upgrade which suppose to have all files intact and Windows server 2019 Std (desktop experience) installed, during the upgrade the last captured screenshot showed "Applying updates 91% completed"Instance status checks"Instance reachability check failed" ------logs ----"2022/06/04 07:32:34Z: Still waiting for meta-data accessibility...2022/06/04 07:34:24Z: Still waiting for meta-data accessibility..."any one experienced the same issue here ? any help would be greatly appreciated. thanksFollowComment"
Ec2 instance Windows server 2019 upgrade
https://repost.aws/questions/QUuPmpWYonSB27tULtwp-bfA/ec2-instance-windows-server-2019-upgrade
false
"0Hello,I'm sorry for the frustration this must have caused.Please check out these troubleshooting guide that may helps:https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/os-upgrade-trbl.htmlhttps://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/common-messages.html#metadata-unavailableRegards,Franky ChenCommentShareFranky Chenanswered a year ago"
"Hi! I'm trying to import a number of existing resources into new Cloudformation stacks. I've been able to successfully import EC2 resources, VPC resources, etc. AFAIK, one of the defining features of a CloudFormation import is that the resources are left unchanged, they're only added to the stack. It pretty much says so in the AWS Console CF Import:"During this import, you need to provide a template that describes the entire stack, including the target resources. If existing stack resources are modified in the prepared template, the import fails."The thing is, when I try to import an RDS resource, it requires me to add several parameters, such as DBInstanceClass, AllocationStorage, MasterUsername, MasterPassword, etc. Two things here:1.I've seen the CF import modify RDS settings such as MultiAZ and, more troublesome, MasterPassword, both from CLI and console.2.If no modification is done to a resource when importing it, why is it asking for a number of parameters that have already been defined for the resource? I haven't seen EC2 import do it that way; EC2 only asks you for the Instance ID, Deletion Policy and that's it.Am I doing anything wrong here? What I would like to do is to add those RDS resources without making any changes at all to them, just like I do with EC2.I know I could add the existing RDS resources to the template with the exact same settings that the resource already has and it shouldn't be a problem, but still, it doesn't look that polished to me, and I'm thinking I may be doing something wrong here.Thanks!FollowComment"
Cloudformation Import of RDS resources modifying settings
https://repost.aws/questions/QU_ijEBvheRxKH3kzZbuG-Xg/cloudformation-import-of-rds-resources-modifying-settings
false
I'm using the C# implementation for Lambda and my code looks like this: Step 1: string response = "Hi.. I'm Jenie your virtual assistant.'\n'Test"; Step 2: string response = "Hi.. I'm Jenie your virtual assistant.\nTest";
Formatting Lex output String
https://repost.aws/questions/QUkBdmrBB3S1-5LgX9kj057g/formatting-lex-output-string
false
"0Hi Rakesh,Amazon Lex requires a very specific JSON output response from Lambda. Please look at the following Lex documentation to see the expected response, and make sure your C# code formats the output in a JSON format.Link: https://docs.aws.amazon.com/lex/latest/dg/lambda-input-response-format.html#using-lambda-response-formatCommentShareLukas_Aanswered 10 months agoSUPPORT ENGINEERAWS_SamMreviewed 10 months ago"
"I have used API gateway to build business logic for my app that invokes lambda function. For security assurance, I have generated a VAT report of the base URL of API from my cyber security expert. A total of 9 Vulnerabilities have been detected including Four Medium, three low-level, and two informational-level vulnerabilities have been identified.(CSP) Wild Card DirectiveContent Security Policy (CSP) Header Not SetCross-Domain MisconfigurationMissing Anti-clickjacking HeaderServer Leaks Information via “X-Powered-By” HTTP Response Header Field(s)Timestamp Disclosure – UnixX-Content-Type-Options Header MissingCharset MismatchRe-examine Cache Directiveshow can remove these all Vulnerability ?is there a need to set or define custom headers? ( if yes then where and how I can do that, either be in API Gateway console or lambda script or in my client or app side code where this API Gateway base URL is invoking ) ?FollowComment"
Handle custom headers in AWS API Gateway?
https://repost.aws/questions/QU-4cYPZUVQdG4J3FAqkv21Q/handel-custom-header-in-aws-api-gateway
true
"1Accepted AnswerIt depends on your requirements and whether you expect the headers to be sent as part of the client request or need to add the headers before the request hits the API Gateway.If you need to block client requests if some headers are missing, you can associate a WAF ACL with the API Gateway and define rules to block requests without mandatory headersLook at these two for guidancehttps://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-control-access-aws-waf.htmlhttps://aws.amazon.com/premiumsupport/knowledge-center/waf-block-http-requests-no-user-agent/If the requirement is that the headers need to be added to the request before the request reaches the API Gateway even if the client did not send the headers, you can do so using Lambda@Edge with a Cloudfront distribution in front of your API Gateway.Look at these for guidancehttps://aws.amazon.com/blogs/networking-and-content-delivery/adding-http-security-headers-using-lambdaedge-and-amazon-cloudfront/ (this example shows response headers but you can use similar concepts to the request headers with some changes)https://docs.amazonaws.cn/en_us/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-how-it-works-tutorial.htmlSome examples of Lambda@Edge functions - https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.htmlCommentShareEXPERTIndranil Banerjee AWSanswered a year ago0Lambda@Edge functions with CloudFront work fine for my scenario.I have added up one additional thing that may more the easiest way to remove security headers vulnerabilities. I have created and deployed the Express app to LambdaBy default, Express.js sends the X-Powered-By response header banner. This can be disabled using the app.disable() method:app.disable('x-powered-by')and also apply headers on the express appapp.use(function(req, res, next) { res.header('Strict-Transport-Security', `max-age=63072000`); res.header('Access-Control-Allow-Origin', `null`); res.header('Referrer-Policy', `no-referrer`); res.header('Permissions-Policy', `microphone 'none'; geolocation 'none'`); res.header('x-frame-options', `DENY`); res.header('Content-type', `application/json; charset=UTF-8`); res.header('Cache-Control', `no-store`); res.header('X-Content-Type-Options', `nosniff`); return next();});CommentSharerePost-User-8228753answered a year ago"
"I have a .csv file in s3://<bucket-name>/SRC1/TBL1/2022/10/04/.Created a crawler on this bucket with the following configuration optionsCrawl new sub-folders onlyCreate a single schema for each S3 pathTable level - 3teaget data base 'sample-db'When I trigger my crawler, expectation was to create a new table 'TBL1' in 'sample-db'. But it did not create the new table.Then I updated the config option to 'Crawl all sub-folders', then the table was created.Question: Can we make crawler to create the table with option Crawl new sub-folders onlyTIAFollowComment"
"Glue crawler creating a table only with 'Crawl all sub-folders' option, but not with 'Crawl new sub-folders only'"
https://repost.aws/questions/QU-njXBearS0uVwUASYMaX_A/glue-crawler-creating-a-table-only-with-crawl-all-sub-folders-option-but-not-with-crawl-new-sub-folders-only
false
"I have one Global DXGW. One VIF each in us-east-1, us-east-2, eu-west-1, ap-northeast-1 and eu-central-1. I also have VPC's in each of the regions. I want to engineer the BGP routes in such a way that if us-east-1 is not available, all routes will use us-east-2. if eu-west-1 is not available, all routes will use eu-central-1is it possible to achieve this failover scenario with one global direct connect gatewayus-east-1 = primaryus-east-2 = secondaryeu-west-1 = primaryeu-central-1 = secondaryap-northeast-1 = primaryus-east-2 = secondary.FollowComment"
Inter-region BGP route failover
https://repost.aws/questions/QUll2u2uymR3OWh4qVqyWuMg/inter-region-bgp-route-failover
false
"0I think from your question that you have physical Direct Connect circuits in us-east-1 and us-east-2; and another pair in eu-west-1 and eu-central-1. If that's correct:Short answer: Yes, you can absolutely do this. There might be a catch though.For each pair of Direct Connect circuits, you advertise the same prefixes on both; on the secondary link you use AS-Path prepending to make it a longer path and therefore less attractive; but should the primary link fail then the secondary will be active as it is the only path available.The catch: It's not clear from the question how your on premises network is structured.Here I'm assuming you have a specific set of networks in your North America locations; and another set of networks in your European locations.If you only need the AWS North America regions to reach the North America locations and the same for Europe then you're good to go - no issues.If you have a global WAN and you want to use (say) the European connections as a backup should the two Direct Connect services in North America fail then you can do this: But you must ensure that you only select a single primary link for each on premises network. You don't want to advertise the North America networks to AWS with the same cost in Europe as you do in North America - by using a single Direct Connect Gateway it will be difficult to ensure that the North America links are preferred.To put this another way: In North America you should advertise the on premises networks to AWS as above; one primary and one secondary. In Europe you should advertise the North America networks to AWS with even more AS-Path prepending so that the North America links are preferred. The reverse is true for the European links.Again, I'm assuming quite a lot here about your network outside of AWS.CommentShareEXPERTBrettski-AWSanswered a year agoSonet a year agoThere is only one global DXGW. All the VIFs from the 4 regions are connected to the same DXGW. Same prefix is advertised from all the 4 regions. There is one data center in all the regions and one VPC in each region in AWS. There is no primary or secondary. One VIF per region. So, if us-east-1 fails, us-east-2 will become available. in North America. us-east-1 will be primary. us-east-2 secondary. In EU, eu-west-1 is primary, if it fails, eu-central-1 will become available. So I don't think this failover request can be done with one global DXGW. once you modify the route in this one global DXGW. it will affect all the routes both in the America's and Europe.Share"
"Dear Amazon Team,Can you please fix the "Samsung Galaxy J7 (2018)"? There is some fullscreen overlay whenever EditText gets focus and it shows this text on white screen:"Uma imagem vale mais que 1000 palavras"Please check:https://us-west-2.console.aws.amazon.com/devicefarm/home?#/projects/dc2068f7-4b09-417c-9909-b77f559c9641/runs/8d328dfd-fcef-4c36-9417-edc9e9ab22af/jobs/00002/suites/00002/tests/00000https://us-west-2.console.aws.amazon.com/devicefarm/home?#/projects/dc2068f7-4b09-417c-9909-b77f559c9641/runs/8d328dfd-fcef-4c36-9417-edc9e9ab22af/jobs/00002/suites/00002/tests/00000Thank you.Regards,DraganFollowComment"
Broken soft keyboard on Samsung Galaxy J7 (2018)
https://repost.aws/questions/QUN-xtiSerQwKgGWduy3kqkA/broken-soft-keyboard-on-samsung-galaxy-j7-2018
false
"0Hi,The issue was resolved for the device. Please check.Thanks,ShreyansCommentSharesktdfanswered 4 years ago"
"Hello,I had configured my instance with two website and two environment for them, everything worked and today my instance is not working anymore.http://35.181.49.244/ redirect to apache page now.Websites :https://ec2022.mozvox.com/https://www.mozvox.com/have a connexion failed.When I try to restart services with ssh :sudo /opt/bitnami/ctlscript.sh restartRestarting services..System has not been booted with systemd as init system (PID 1). Can't operate.Failed to connect to bus: Host is downWhen I try to enable sshd :sudo systemctl enable sshdCreated symlink sshd.service → (null).Failed to enable: Too many levels of symbolic links.My hard disk is not empty :df -hT /dev/xvda1Filesystem Type Size Used Avail Use% Mounted on/dev/xvda1 ext4 20G 4.6G 15G 25% /What can be the problem ?Thanks for your reply.FollowComment"
My Lightsail instance crashed - connection failed on my HTTPS websites
https://repost.aws/questions/QU8na57kzkTmOrAy9973jkyg/my-instance-lightsail-crash-connexion-failed-on-my-https-websites
false
"0Hello Customer,It seems like you installed an additional Apache package on top of your Bitnami image. Bitnami already includes a version of Apache, and when users manually install Apache via the OS package Manager, the Apache package will run on the same port 80 as the Bitnami image, hence the reason you are getting redirected to the Apache page. To resolve this, you would need to uninstall any additional Apache packages that are present in your system and then you can try to run the Bitnami again by using the following command: /opt/bitnami/ctlscript.sh start.Hope this Helps!CommentShareHanzla_Yanswered 10 months agoSUPPORT ENGINEERAWS_SamMreviewed 10 months agoEl Pollo 10 months agoThanks for reply but I haven't installed an additional Apache...At the time, I reinstalled all the websites... But it gave me the problem again after a while without me doing any manipulation... I still have the same problem on my new instances...I don't renew my certificates with let's encrypt because I thought it was automatic.I restart apache and it's fixed now for "connexion failed" but i had always the error :sudo /opt/bitnami/ctlscript.sh restart apachesudo /opt/bitnami/ctlscript.sh stopStoping services..System has not been booted with systemd as init system (PID 1). Can't operate.Failed to connect to bus: Host is downI can't renew my certificates...[mozvox.com] [mozvox.com] acme: error presenting token: could not start HTTPS server for challenge: listen tcp :443: bind: address already in useI don't understandShare0I do a bitnami dianostic with integrated tool :01fba2e5-d9e7-f630-3dba-5f28983213fcCommentShareEl Polloanswered 10 months ago0Hello, the error messages you are receiving when using the /opt/bitnami/ctlscript.sh script indicate that the system is not using systemd. This may mean that you installed your Bitnami application on a Linux operating system that predates systemd and uses SysVinit, or Windows Subsystem for Linux. Without visibility into your resources, I am not able to determine exactly why this is occurring. I encourage you to open a support case so that our AWS Support Engineers can investigate this and assist you further.CommentShareSUPPORT ENGINEERAWS_SamManswered 10 months ago"
"I have created a REST API on Amazon API Gateway. When in my dashboard I try to click on the API I have created , in the APIs page, first the screen becomes white, and then it goes back to the APIs page, without letting me configure the API I have created.Do you know if I have to add a policy and which one eventually?FollowComment"
Cannot interact with my REST API
https://repost.aws/questions/QUHM8XGeRETRe6_f90cW7t9Q/cannot-interact-with-my-rest-api
false
"0It seems like your ID might need API gateway as a trusted entity. Please try these stepsOpen the IAM console.On the navigation pane, choose Roles.Choose Create role.In the Select type of trusted entity section, choose AWS service.In the Choose a use case section, choose API Gateway.In the Select your use case section, choose API Gateway.Choose Next: Permissions. Just Click NextAdd tags (optional), and then choose Next: Review.For Role name, enter a name for your policy. For example: api-gateway-rest-apiChoose Create roleAdd role to your User ID.CommentShareAnanth Tirumanuranswered 8 months agoMatteo Sartoni 8 months agoWhat do you mean with "Add role to your User ID"? I have asked the administrator of my account and he attached the role to a Policy linked to my account, but the problem is still thereShare"
"I have a customer who is trying to use ODBC to connect to Athena. They have followed this documentation for "Configuring ODBC Connections on a NonWindows Machine" setup: https://s3.amazonaws.com/athena-downloads/drivers/ODBC/SimbaAthenaODBC_1.0.5/Simba+Athena+ODBC+Install+and+Configuration+Guide.pdfConnectivity works fine on macOS when AuthenticationType=IAM credentials.Have issues getting it to work with AuthenticationType=IAM Profile as suggested here https://www.simba.com/products/Athena/doc/ODBC_InstallGuide/mac/content/odbc/ath/configuring/authenticating/iamprofile.htmI have re-produced Customer setup. The following is the error that I get while using AuthenticationType=IAM Profile08S01[Simba] [DriverIAMSupport] (8600) Connection Error: No AWS profile found: defaultHas anyone come across this error ? Kindly help, thanks.FollowComment"
Need help troubleshooting Athena/ODBC error
https://repost.aws/questions/QUYHP6opqITCuOO9B6yO0rPg/need-help-troubleshooting-athena-odbc-error
true
"0Accepted AnswerThis error occurs when the profile set in the ODBC AwsProfile keyword is missing from .aws/credentials. In this particular case you've either set AwsProfile=default or didn't specify a profile and its its looking for the profile named default in the credentials file.You should configure the credentials in the in .aws/ credentials files according the cli documentation here.then setup or odbc.ini similar to:[ODBC Data Sources]Athena=Simba Athena ODBC Driver[Athena]Driver=/Library/simba/athenaodbc/lib/libathenaodbc_sbu.dylibAwsRegion=us-west-2S3OutputLocation=s3://query-results-bucket/testfolder-1AuthenticationType=IAM ProfileAwsProfile=someprofilenameUPDATEWhile the information above is correct, in this particular instance the issue was with Excel and the MacOS Application Sandboxes. Microsoft Excel is Sandboxed and is unable to read the path ~/.aws/credentials to retrieve the profile credentials. A work around is to create a .aws directory inside the Excel sandbox directory and then hard link the credentials file. This will work around the issue and still use the original credentials file.mkdir ~/Library/Containers/com.microsoft.Excel/Data/.awsln ~/.aws/credentials ~/Library/Containers/com.microsoft.Excel/Data/.aws/CommentShareAWS-User-8643885answered 3 years ago"
We have a use case where we have to bulk download around 50 GB of data from S3 to users' local machines. We have planned to use ECS to pick up files until we have fetched around 10 GB from the source S3 bucket, zip them, and then upload that zip to another S3 bucket. We would have to perform this operation multiple times until we generate all the zips. Is there a way to generate the zip of the whole data in one go? Also, how do we then stream this large zip file from our destination S3 bucket to the users' local machines?
How to efficiently perform streaming of large zip files?
https://repost.aws/questions/QUYFOaaDZSQEmqIm-9_0jLqw/how-to-efficiently-perform-streaming-of-large-zip-files
false
"0If you create an EC2 instance with enough memory then it should be possible to copy the files onto the instance and compress them into a single file. However, if speed is the goal then parallelizing the compression of sets of the files would probably be faster and your ECS approach (perhaps with smaller chunks and more containers) would work well.If this is an ongoing process then perhaps a Lambda function could be used to compress all new files and transfer them directly?CommentShareAlex_Kanswered 3 months agorePost-User-2099128 3 months agoAlso to fetch the zip to clients' local i believe we can use S3 transfer manager of AWS SDK. But any idea around how much data can be transferred in a go using transfer manager?Share0Hi,For your question: Is there a way by which we can generate the zip of the whole data in one go?Currently there is no S3 provided functionality to do this. This must be handled via individually the objects from S3 and creating a ZIP archive. If you want to do it within the AWS Cloud, as an example, you could use Lambda (if within the timeout) or ECS or EC2 or AWS BatchFor your question: How do we then stream this large zip file from our destination S3 to users' local?It is possible to download large files from Amazon S3 using a browser by using the AWS SDKPlease refer to these articles for understanding/examples:https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/s3-browser-examples.htmlhttps://docs.aws.amazon.com/AmazonS3/latest/userguide/example_s3_Scenario_UsingLargeFiles_section.htmlThanksCommentShareRamaanswered 3 months ago"
"About 12:45 eastern time, connectivity between our servers had issues. HTTP connections started timing-out, about 4% of the time. Now (around 13:00), servers are becoming unreachable via ping, ssh, and http. Interestingly, udp seems to be working (mosh).I booted a new server, same problems.Connectivity within us-west, and us-east is a problem, as well as connectivity between them. I've had problems with connecting from outside aws as well.Connectivity seems worse on us-west.FollowComment"
network connectivity us-west and us-east
https://repost.aws/questions/QUCr6MZoPiRmipxwgFonuO_w/network-connectivity-us-west-and-us-east
false
"0Turns out it was the DNS SOA. register.com switched us to ztomy.com nameservers, and failed to completely switch us back. Of course it was "only sometimes". So, every 5 minutes or so, we'd get the wrong DNS resolution. So, nothing to do with AWS networking.CommentSharec2agroveranswered 3 years ago"
"I noticed the last version of StepFunctionsLocal is 1.12.0 (~ 5 months ago) which included support for the new Intrinsic Functions.Will the service be updated to include new futures like the distributed map states?FollowCommentThabo_M SUPPORT ENGINEERa month agoWe greatly value your feedback as it is the primary catalyst for improving our services. I have noted the feature request with the service team. However, I am unable to give you a precise timeline for when this feature will be considered or implemented. Any updates to our services can be found on our "AWS What's New" page (https://aws.amazon.com/blogs/aws/), which tracks new additions to the services.ShareEmma Moinat a month agoWould also like to know when this would be getting an update. It is getting quite out of date now. Certain things will be deprecated soon from step functions but their replacements are not even available in the local image!Share"
Step Functions Local update?
https://repost.aws/questions/QUQqn10Za6Seu_PvhUSgeXXA/step-functions-local-update
false
Hello! I'm trying to find a way to change the default styling of the CAPTCHA page but cannot find any way to do that. Is there a way at least to change the color palette? Thank you
AWS WAF CAPTCHA Customizing
https://repost.aws/questions/QU2egpE7SRRG6SkljyTxs27g/aws-waf-captcha-customizing
false
"0Hi, with the Captcha action, you can add custom headers to the web request. AWS WAF prefixes your custom header names with x-amzn-waf- when it inserts them. For the CAPTCHA action, AWS WAF only applies the customization if the request passes the CAPTCHA inspection.You can refer this document https://docs.aws.amazon.com/waf/latest/developerguide/waf-captcha-how-it-works.htmlCommentSharesouravanswered a year agorePost-User-3437456 a year agoI've read this article and unfortunately I don't see any options to change CAPTCHA page styling. Are there any specific custom headers that I need to add to change styles/colors?Share0AWS Web Application Firewall customers can now use the AWS WAF Captcha JavaScript API for enhanced control over the Captcha workflowshttps://aws.amazon.com/about-aws/whats-new/2023/04/aws-waf-captcha-javascript-api-support/This will allow you to improve the Captcha customer experience by embedding Captcha problems in your existing webpagesCommentShareAWS-User-3413787answered 21 days ago"
"I running on :Deep Learning AMI (Ubuntu 18.04) Version 56.0 - ami-083abc80c473f5d88, but I have tried several similar DLAMI.I am unable to access CUDA from pytorch to train my models.See here:$ apt list --installed | grep -i "nvidia"WARNING: apt does not have a stable CLI interface. Use with caution in scripts.libnvidia-compute-460-server/bionic-updates,bionic-security,now 460.106.00-0ubuntu0.18.04.2 amd64 [installed,automatic]libnvidia-container-tools/bionic,now 1.7.0-1 amd64 [installed,automatic]libnvidia-container1/bionic,now 1.7.0-1 amd64 [installed,automatic]nvidia-container-toolkit/bionic,now 1.7.0-1 amd64 [installed]nvidia-cuda-dev/bionic,now 9.1.85-3ubuntu1 amd64 [installed,automatic]nvidia-cuda-doc/bionic,now 9.1.85-3ubuntu1 all [installed,automatic]nvidia-cuda-gdb/bionic,now 9.1.85-3ubuntu1 amd64 [installed,automatic]nvidia-cuda-toolkit/bionic,now 9.1.85-3ubuntu1 amd64 [installed]nvidia-docker2/bionic,now 2.8.0-1 all [installed]nvidia-fabricmanager-450/now 450.142.00-1 amd64 [installed,upgradable to: 450.156.00-0ubuntu0.18.04.1]nvidia-opencl-dev/bionic,now 9.1.85-3ubuntu1 amd64 [installed,automatic]nvidia-profiler/bionic,now 9.1.85-3ubuntu1 amd64 [installed,automatic]nvidia-visual-profiler/bionic,now 9.1.85-3ubuntu1 amd64 [installed,automatic]And it shows I have Nvidia. However, when I run python:~$ bpythonbpython version 0.22.1 on top of Python 3.8.12 /home/ubuntu/anaconda3/envs/pytorch_p38/bin/python3.8>>> import torch.nn as nn>>> import torch>>> torch.cuda.is_available()FalseEven after I re-install nvidiasudo apt install nvidia-driver-455I get this:(pytorch_p38) ubuntu@ip-172-31-95-17:~$ nvcc --versionnvcc: NVIDIA (R) Cuda compiler driverCopyright (c) 2005-2020 NVIDIA CorporationBuilt on Mon_Oct_12_20:09:46_PDT_2020Cuda compilation tools, release 11.1, V11.1.105Build cuda_11.1.TC455_06.29190527_0(pytorch_p38) ubuntu@ip-172-31-95-17:~$ bpythonbpython version 0.22.1 on top of Python 3.8.12 /home/ubuntu/anaconda3/envs/pytorch_p38/bin/python3.8>>> import torch>>> torch.cuda.is_available()FalseDoes anyone know how to get pytorch to be able to access cuda? Any help is greatly appreciatedFollowCommentChris Pollard a year agoWhat instance type are you using?ShareNathaniel Ng a year agoWhich AMI version are you using, and are you by any chance using a g5-series instance?Share"
DLAMI does not have CUDA/NVIDIA (and cannot access cuda from pytorch)
https://repost.aws/questions/QUKiD2E4kVSvyq9cwhs0iPTA/dlami-does-not-have-cuda-nvidia-and-cannot-access-cuda-from-pytorch
false
"Hello,I'm having trouble with Python OpenCV library, running on AWS Lightsail container instance.Some information:It is running on python:3.7 Docker image.Python Flask appAWS Lightsail container instanceUsing following packages: linkUses opencv-contrib-python-headless==4.5.4.60 for OpenCV.Error image: linkWhen trying to compare two images, I'm receiving HTTP status code of 502 Bad Gateway, which is very strange.Seems to work perfectly on my Windows machine locally, but on this Linux image it does not work.`from cv2 import cv2import logginglogger = logging.getLogger()def compare_two_images(image_to_compare_file, image_to_compare_against_file):# Image imports# Featureslogger.warning("image_to_compare_file " + image_to_compare_file)logger.warning("image_to_compare_against_file " + image_to_compare_against_file)sift = cv2.SIFT_create() logger.warning("SIFT created " + str(sift is None))# QueryImageimg1 = cv2.imread(image_to_compare_file, cv2.IMREAD_GRAYSCALE) logger.warning("IMG1 read created " + str(img1 is None))# Find the key points and descriptors with SIFTkp1, desc1 = sift.detectAndCompute(img1, None)logger.warning("DETECT AND COMPUTE " + str(kp1 is None) + " " + str(desc1 is None)) img2 = cv2.imread(image_to_compare_against_file, cv2.IMREAD_GRAYSCALE)logger.warning("IMG2 read created " + str(img2 is None))kp2, desc2 = sift.detectAndCompute(img2, None)logger.warning("DETECT AND COMPUTE " + str(kp2 == None) + " " + str(desc2 is None))# BFMatcher with default paramsbf = cv2.BFMatcher()matches = bf.knnMatch(desc1, desc2, k=2)# Apply ratio testgood = []for m, n in matches: if m.distance < 0.55 * n.distance: good.append([m])`It crashes on kp1, desc1 = sift.detectAndCompute(img1, None) and produces 502 Bad Gateway.Then, on some other endpoints I have in my Python Flask app, it produces 503 Service Temporarily Unavailable for a very times.After that, I can see that images were deleted.Any help is appreciated.FollowCommentben-from-aws a year agoI think you accidentally submitted the question twice? See https://repost.aws/questions/QUHTcwJWsnTimBF6fduWK3Hg/python-flask-open-cv-library-does-not-work-produces-http-code-502-bad-gateway-when-trying-to-compare-imagesShare"
"Python Flask: OpenCV library does not work, produces HTTP code 502 Bad Gateway when trying to compare images"
https://repost.aws/questions/QUhnQ4gMZxReic9mp15QloiQ/python-flask-opencv-library-does-not-work-produces-http-code-502-bad-gateway-when-trying-to-compare-images
false
"0Based on the error, it seems like you ALB is unsure how to process the connection. How is your ALB configured?While 5XX errors do appear to be commonplace for this library, I believe your issue is related to load balancer configuration more than anything.CommentShareDan_Fanswered a year agoAWS-User-4298801 a year agoTo be honest, I don't have anything configured on ALB side, using this tutorial:https://aws.amazon.com/getting-started/hands-on/serve-a-flask-app/Share"
"For our automation flow I need to find and then subscribe to software in Marketplace. I would much rather find appropriate service library (something similar to AmazonCloudFormation, AmazonRDS etc). Is there such library and if not what other options to subscribe?Also I wanted to avoid managing token http requests myself, hence the use of libraries.FollowComment"
Marketplace: Need to Subscribe to Software using Java SDK
https://repost.aws/questions/QU8U27q49eR1qqMvuyvX59aA/marketplace-need-to-subscribe-to-software-using-java-sdk
false
"0There is not an API available for AWS Marketplace for automating subscriptions to products today. Since subscriptions can have costs and associated EULAs, they must be manually subscribed to by a principal with the appropriate IAM permissions to subscribe from a given account.CommentSharethat_mikeanswered 9 months ago"
"helloI created aws greengrass v2 ml component.I want to run the ml model on Raspberry Pi with greengrass installed.So I wrote the ml component and deployed it.The result I expect is that the ml code I wrote is executed on Raspberry Pi.This is a custom component.I referred to https://docs.aws.amazon.com/greengrass/v2/developerguide/ml-customization.html this link.But there is a problem.If I deployed the components, say that the Deploy is complete. But there's an error.Looking at the log, it seems that there is a problem opening the virtual environment(venv), or it is because the tensorflow is not downloadedI want to write a script that opens the virtual environment properly in the recipe of the component.The recipe I wrote looks like this.{"RecipeFormatVersion": "2020-01-25","ComponentName": "com.example.jamesML","ComponentVersion": "1.0.5","ComponentType": "aws.greengrass.generic","ComponentDescription": "Capstone Design james machine learning.","ComponentPublisher": "Me","ComponentConfiguration": {"DefaultConfiguration": {"accessControl": {"aws.greengrass.ipc.mqttproxy": {"com.example.jamesML:mqttproxy:1": {"policyDescription": "Allows access to publish via topic ml/dlr/image-classification.","operations": ["aws.greengrass#PublishToIoTCore"],"resources": ["ml/dlr/image-classification"]}}}}},"Manifests": [{"Platform": {"os": "linux","architecture": "arm"},"Lifecycle": {"setEnv": {"Script": "cd venv&&activate venv"},"run": {"RequiresPrivilege": "true","script": "pip install opencv-python;pip install tensorflow==2.3;python3 james.py"}},"Artifacts": [{"Uri": "s3://greengrass-sagemaker-0930/james_ml.zip","Digest": "CY6f7pUyMRrgbxbKoDMig3GWQCJ4LvKyA5xnhGWGhlY=","Algorithm": "SHA-256","Unarchive": "ZIP","Permission": {"Read": "OWNER","Execute": "NONE"}}]}],"Lifecycle": {}}Can you solve the problem here?Is this part I wrote wrong?"setEnv": {"Script": "cd venv&&activate venv"},I want to know the answer. Please. Please.This is my greengrass v2 component log (Raspberry Pi)2022-10-13T10:01:44.004Z [INFO] (pool-2-thread-28) com.example.jamesML: shell-runner-start. {scriptName=services.com.example.jamesML.lifecycle.run.script, serviceName=com.example.jamesML, currentState=STARTING, command=["pip install opencv-python;pip install tensorflow==2.3;python3 james.py"]}2022-10-13T10:01:45.326Z [INFO] (Copier) com.example.jamesML: stdout. Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple. {scriptName=services.com.example.jamesML.lifecycle.run.script, serviceName=com.example.jamesML, currentState=RUNNING}2022-10-13T10:01:45.329Z [INFO] (Copier) com.example.jamesML: stdout. Requirement already satisfied: opencv-python in /usr/local/lib/python3.9/dist-packages (4.6.0.66). {scriptName=services.com.example.jamesML.lifecycle.run.script, serviceName=com.example.jamesML, currentState=RUNNING}2022-10-13T10:01:45.373Z [INFO] (Copier) com.example.jamesML: stdout. Requirement already satisfied: numpy>=1.14.5 in /usr/local/lib/python3.9/dist-packages (from opencv-python) (1.23.4). {scriptName=services.com.example.jamesML.lifecycle.run.script, serviceName=com.example.jamesML, currentState=RUNNING}2022-10-13T10:01:45.901Z [WARN] (Copier) com.example.jamesML: stderr. WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv. 
{scriptName=services.com.example.jamesML.lifecycle.run.script, serviceName=com.example.jamesML, currentState=RUNNING}2022-10-13T10:01:47.470Z [INFO] (Copier) com.example.jamesML: stdout. Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple. {scriptName=services.com.example.jamesML.lifecycle.run.script, serviceName=com.example.jamesML, currentState=RUNNING}2022-10-13T10:01:48.909Z [WARN] (Copier) com.example.jamesML: stderr. ERROR: Could not find a version that satisfies the requirement tensorflow==2.3 (from versions: none). {scriptName=services.com.example.jamesML.lifecycle.run.script, serviceName=com.example.jamesML, currentState=RUNNING}2022-10-13T10:01:48.910Z [WARN] (Copier) com.example.jamesML: stderr. ERROR: No matching distribution found for tensorflow==2.3. {scriptName=services.com.example.jamesML.lifecycle.run.script, serviceName=com.example.jamesML, currentState=RUNNING}2022-10-13T10:01:49.161Z [WARN] (Copier) com.example.jamesML: stderr. python3: can't open file 'james.py': [Errno 2] No such file or directory. {scriptName=services.com.example.jamesML.lifecycle.run.script, serviceName=com.example.jamesML, currentState=RUNNING}2022-10-13T10:01:49.169Z [INFO] (Copier) com.example.jamesML: Run script exited. {exitCode=2, serviceName=com.example.jamesML, currentState=RUNNING}FollowComment"
greengrass v2 ml component
https://repost.aws/questions/QUFvQwq-inRuq6FH_1VyNwpg/greengrass-v2-ml-component
true
"1Accepted AnswerHi,your recipe seems wrong. setenv is used to set environment variables and in your case you are creating a variable named Script with value cd venv&&activate venv. Move the cd venv&&activate venv inside the Run lifecycle script. You should also avoid using "RequiresPrivilege": "true" and instead assign the correct permission to the the user executing the script (eg ggcuser).The fact that tensorflow is not found might be related to the fact that the virtualenv is not activated.MassimilianoCommentShareEXPERTMassimilianoAWSanswered 8 months agohyorim 8 months agoThank you for answering me!I did it as you told me.{"RecipeFormatVersion": "2020-01-25","ComponentName": "com.example.jamesML","ComponentVersion": "1.0.6","ComponentType": "aws.greengrass.generic","ComponentDescription": "Capstone Design james machine learning.","ComponentPublisher": "Me","ComponentConfiguration": {"DefaultConfiguration": {"accessControl": {"aws.greengrass.ipc.mqttproxy": {"com.example.jamesML:mqttproxy:1": {"policyDescription": "Allows access to publish via topic ml/dlr/image-classification.","operations": ["aws.greengrass#PublishToIoTCore"],"resources": ["ml/dlr/image-classification"]}}}}},"Manifests": [{"Platform": {"os": "linux","architecture": "arm"},"Lifecycle": {"run": {"script": "cd venv&&activate venv&&pip install opencv-python&&pip install tensorflow==2.3&&python3 james.py"}},"Artifacts": [{"Uri": "s3://greengrass-sagemaker-0930/james_ml.zip","Digest": "CY6f7pUyMRrgbxbKoDMig3GWQCJ4LvKyA5xnhGWGhlY=","Algorithm": "SHA-256","Unarchive": "ZIP","Permission": {"Read": "OWNER","Execute": "NONE"}}]}],"Lifecycle": {}}Is this how you told me to do it?Sharehyorim 8 months agoEven so, I have an error again.2022-10-13T12:33:21.675Z [INFO] (pool-2-thread-33) com.example.jamesML: shell-runner-start. {scriptName=services.com.example.jamesML.lifecycle.run.script, serviceName=com.example.jamesML, currentState=STARTING, command=["cd venv&&activate venv&&pip install opencv-python&&pip install tensorflow==2.3..."]}2022-10-13T12:33:21.755Z [WARN] (Copier) com.example.jamesML: stderr. sh: 1: cd: can't cd to venv. {scriptName=services.com.example.jamesML.lifecycle.run.script, serviceName=com.example.jamesML, currentState=RUNNING}2022-10-13T12:33:21.762Z [INFO] (Copier) com.example.jamesML: Run script exited. {exitCode=2, serviceName=com.example.jamesML, currentState=RUNNING}2022-10-13T12:33:23.308Z [INFO] (pool-2-thread-32) com.example.jamesML: shell-runner-start. {scriptName=services.com.example.jamesML.lifecycle.run.script, serviceName=com.example.jamesML, currentState=STARTING, command=["cd venv&&activate venv&&pip install opencv-python&&pip install tensorflow==2.3..."]}2022-10-13T12:33:23.329Z [WARN] (Copier) com.example.jamesML: stderr. sh: 1: cd: can't cd to venv. {scriptName=services.com.example.jamesML.lifecycle.run.script, serviceName=com.example.jamesML, currentState=RUNNING}2022-10-13T12:33:23.334Z [INFO] (Copier) com.example.jamesML: Run script exited. {exitCode=2, serviceName=com.example.jamesML, currentState=RUNNING}Sharehyorim 8 months ago2022-10-13T12:33:25Z [INFO] (pool-2-thread-32) com.example.jamesML: shell-runner-start. {scriptName=services.com.example.jamesML.lifecycle.run.script, serviceName=com.example.jamesML, currentState=STARTING, command=["cd venv&&activate venv&&pip install opencv-python&&pip install tensorflow==2.3..."]}2022-10-13T12:33:25.022Z [WARN] (Copier) com.example.jamesML: stderr. sh: 1: cd: can't cd to venv. 
{scriptName=services.com.example.jamesML.lifecycle.run.script, serviceName=com.example.jamesML, currentState=RUNNING}2022-10-13T12:33:25.026Z [INFO] (Copier) com.example.jamesML: Run script exited. {exitCode=2, serviceName=com.example.jamesML, currentState=RUNNING}This is my raspberry pie log.Could you answer this, too?I'm begging you.ShareGreg_B EXPERT8 months agoHi. What is creating the virtual environment? It seems it has not been created, hence you can't change directory into it. Typically it would be created in the Install lifecycle. Massimiliano has an example here: https://github.com/awslabs/aws-greengrass-labs-jupyterlab/blob/main/recipes/aws.greengrass.labs.jupyterlab.yamlShareMassimilianoAWS EXPERT8 months agoPlease note that to activate a virtual environment you should use . venv/bin/activate. Do not use source venv/bin/activate in a component recipe since the commands are run via /bin/sh by default which doesn't have the source command.Share"
"Hi everyone,I am looking for information on how to disable or revoke the certificate of a thing in IoT Core and I have only found that it can be done from its API.Is it possible to modify the state of a thing from the java sdk?How can I, from java, communicate with the IoT Core API?Regards.FollowComment"
IoT Core API with Java
https://repost.aws/questions/QU3tCGSxxaTDiTr0ap2-cmHg/iot-core-api-with-java
false
"0Hi. In general, for any given API operation, there's a matching CLI operation and matching methods in each of our SDKs. For example, for the IoT UpdateCertificate API operation:CLI: https://docs.aws.amazon.com/cli/latest/reference/iot/update-certificate.htmlPython boto3: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/iot.html#IoT.Client.update_certificateJava: https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/iot/AWSIotClient.html#updateCertificate-com.amazonaws.services.iot.model.UpdateCertificateRequest-JS: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Iot.html#updateCertificate-propertyAnd so onYou can see all of the Java control plane methods here: https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/iot/AWSIot.htmlDeveloper guide: https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/home.htmlCommentShareEXPERTGreg_Banswered a year ago"
"few days back my instance have got an abusive attack, I have taken security measures, but this time again it happend. also i don't know how automatically ssh-key removed from authorized_key file, and another key have taken place, each time my instance showing "server key refused" when trying to ssh into it using putty.any help will be appreciated.FollowComment"
my ec2 (ubuntu-20.04) instance having abusive attacks in an intermittent manner
https://repost.aws/questions/QU_kUvb3R-QCqXC5PfxdvYGQ/my-ec2-ubuntu-20-04-instnace-having-abusive-attack-in-an-intermittent-manner
false
"0A security best practice is to limit the CIDR that you can access port 22 on. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.htmlAlternatively, and current best practice, you can use Systems Manager Session Manager, where access is managed through IAM: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.htmlCommentShareRodney Lesteranswered a year ago"
"I am using awslogs to push the logs to cloudwatch log groups, how ever it is pushing everything. Is there a way where i push only error and warning messages from awslogs conf file?FollowComment"
how to push only error and warning logs to cloudwatch
https://repost.aws/questions/QUxA2GPYnlS6O3RgRcOTcsbQ/how-to-push-only-error-and-warning-logs-to-cloudwatch
false
"1If you don't have large amounts of log data, you may simply continue sending all data and filter it directly in CloudWatchs, for example via Logs Insights. This has the advantage that you don't lose any of your log data (such as Info or Debug messages) and still can query for relevant information at a later point, for example for troubleshooting.If you want to filter data at the source, you should look at alternative log drivers, for example Firelens. Firelens can also send logs to CloudWatch, but supports more advanced features such as using regular expressions.CommentShareDanielanswered 10 months ago0Hello,The awslogs agent is deprecated and will not receive any new updates. While you can continue to use it, we strongly recommend you to upgrading to the new unified CloudWatch agent.[1]In the new unified CloudWatch agent, you can consider the filter field in the logs section[2] of the CW Agent configuration to exclude logs matching a certain criteria.From [2],For example, the following excerpt of the CloudWatch agent configuration file publishes logs that are PUT and POST requests to CloudWatch Logs, but excluding logs that come from Firefox."collect_list": [ { "file_path": "/opt/aws/amazon-cloudwatch-agent/logs/test.log", "log_group_name": "test.log", "log_stream_name": "test.log", "filters": [ { "type": "exclude", "expression": "Firefox" }, { "type": "include", "expression": "P(UT|OST)" } ] }, .....][1] https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html[2] https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html#CloudWatch-Agent-Configuration-File-LogssectionCommentShareSUPPORT ENGINEERAshish_Kanswered 4 months ago"
"A customer wants to move its IPV installation (with data) from on-prem to AWS.The storage layer on premise relies on their own object storage solution which has an S3 compatible API (https://docs.ceph.com/en/latest/radosgw/s3/). Now the customer wants to move 70 TB of content to AWS together with the whole IPV suite.While the compute related part has been solved (they have been in contact with IPV and sized accordingly to their needs) the migration of the data is still open. We have discussed both DataSync and SnowBall.DataSync can support on prem object store as source and keep the metadata intact and it could work for a full migration and eventually scheduled syncs until the cut-over happens, but moving 70TB takes 8-9 days with a dedicated 1Gbit/s connection. I assume he also needs to purchase a Direct Connect too to ensure this expected speed.My preferred option for this customer however would be to use SnowBall for the initial heavy bulky migration and then use DataSync to keep the data in sync later on. Does SnowBall allow to copy the on-prem object store with their metadata and move them to S3?FollowComment"
Migrate on-prem object storage to S3 - Snowball + DataSync
https://repost.aws/questions/QUzYwmeO7mR7SCRwkzFOrwHA/migrate-on-prem-object-storage-to-s3-snowball-datasync
true
"0Accepted AnswerWith regards to moving data from an on-prem Ceph to AWS, the following are some considerations:(1) Which version / release of Ceph is this?(2) Are you trying to move data from rbd, rgw, and cephfs or just rgw?(3) You can certainly use DataSync to transfer data from rgw but having a reasonably decent connection to move 70 TB would be required if you have a timeline that needs to be met.(4) You can use Snowball Edge to transfer data from Ceph onto Snowball for the bulk migration, but it will not be able to retain the metadata if you use S3 for the data transfer (you can retain POSIX metadata if you use the File Interface/NFS though that will be quite a lot slower than using the S3 endpoint); with this method, you can use DataSync after the data has been imported into S3 to synchronize the metadata and any updates. Note: S3 on Snowball Edge supports a subset of the S3 API/CLI, so this would need to be tested.CommentShareMODERATORAWS-User-2697939answered 3 years ago"
"Hi guys.Help me please.I have a task, I need to create a policy with these permissions{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "iam:CreateRole", "iam:PutRolePolicy", "lambda:CreateFunction", "lambda:InvokeAsync", "lambda:InvokeFunction", "iam:PassRole" ], "Resource": [ "*" ] } ]}But when I specify "Resource": [ "*" ] then I see the messagePassRole With Star In Resource: Using the iam:PassRole action with wildcards (*) in the resource can be overly permissive because it allows iam:PassRole permissions on multiple resources. We recommend that you specify resource ARNs or add the iam:PassedToService condition key to your statement. Learn more My task is that I could bind the "Resource" to the user account and preferably to the region in which it works. But no matter how many options I try JSON, I get an errorMy JSON code looks like this.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "iam:CreateRole", "iam:PutRolePolicy", "iam:PassRole" ], "Resource":[ "arn:aws:iam::123456789012:/*" ] }, { "Effect": "Allow", "Action": [ "lambda:CreateFunction", "lambda:InvokeAsync", "lambda:InvokeFunction", "lambda:UpdateAlias", "lambda:CreateAlias", "lambda:GetFunctionConfiguration", "lambda:AddPermission", "lambda:UpdateFunctionCode" ], "Resource": [ "arn:aws:lambda:eu-central-1::123456789012:user/xxxx*", "arn:aws:lambda:us-west-2::123456789012:user/xxxx*" ] }]}I have tried many different options but can't get the result I want.Help me please.FollowComment"
Creating a policy for Apache Kafka (MSK)
https://repost.aws/questions/QUs7f8agM5S1Wopfge7Z1ihQ/creating-a-policy-for-apache-kafka-msk
false
"1Have you tried using Visual editor to help you with this? It guides you with policy creation.The policy you've provided allows all actions you listed on all resources (*) with an "Allow" effect. This is likely too permissive and could pose a security risk.You should specify the specific resources and actions that are needed for your use case. For instance, you can specify a specific Amazon Resource Name (ARN) for the resource.You should also consider adding a condition to the policy. This will allow you to further restrict access to the resources and actions. For example, you can restrict access to a specific IP address, user, or time of day.Here is an example policy that is more restrictive:{ "Version":"2012-10-17", "Statement":[ { "Effect":"Allow", "Action":[ "iam:CreateRole", "iam:PutRolePolicy", "lambda:CreateFunction", "lambda:InvokeAsync", "lambda:InvokeFunction", "iam:PassRole" ], "Resource":[ "arn:aws:iam::123456789012:role/lambda-execution-role" ], "Condition":{ "IpAddress":{ "aws:SourceIp":"192.0.2.0/24" } } } ]}The above example policy is just an example, you should use values that match your use case.CommentShareNikoanswered 4 months agorePost-User-8411867 4 months agoThank you very much for your help, I already figured it out and found this policy in the AWS documentationShare"
"I launched my exam on the exam day for AWS Solution architect associate - however it shows"The check-in window for this exam is now closed."I was not aware I could not reschedule less than 24hour before the exam, which I had done previously, but the system did allow me to reschedule.I was wondering could this have been the case, why I could not take the exam?Additionally, what can I do now? As the system should not have allowed me to re-schedule.Thank youFollowComment"
AWS exam shows "The check-in window for this exam is now closed."
https://repost.aws/questions/QUlnB32bFZSnurVeP-DHb32w/aws-exam-shows-the-check-in-window-for-this-exam-is-now-closed
false
0Please contact support here: https://www.aws.training/SupportCommentShareEXPERTChris_Ganswered 9 months ago
My DMARC tool (easyDMARC) says that our sending domain is different from the return-path domain (amazonses):"SPF failed since From domain XYZ.COM does not align with Return-Path domain eu-central-1.amazonses.com."This seems to be regarded bad for deliveriability.Can this issue be resolved?Thank you.KlausFollowComment
Can I configure the return-path in AWS SES
https://repost.aws/questions/QUAG_CNY5ZRBefbRdcwey6ug/can-i-configure-the-return-path-in-aws-ses
false
"0Hello,In order to verify and pass SPF, you need to setup Custom Mail from for the same verified domain. When an email is sent, it has two addresses that indicate its source: a From address that's displayed to the message recipient, and a MAIL FROM address that indicates where the message originated. The MAIL FROM address is sometimes called the envelope sender, envelope from, bounce address, or Return Path address. Mail servers use the MAIL FROM address to return bounce messages and other error notifications. The MAIL FROM address is usually only viewable by recipients if they view the source code for the message.https://docs.aws.amazon.com/ses/latest/dg/mail-from.htmlFor SPF record, please confirm that you add as shown in example below:"v=spf1 include:amazonses.com ~all""v=spf1 include:example.com include:amazonses.com ~all"https://docs.aws.amazon.com/ses/latest/dg/send-email-authentication-spf.htmlLet me know if you have any questions.CommentShareSUPPORT ENGINEERAjinkya B-AWSanswered a year ago"
"We have a shared Organization and would like to provide member accounts in an Organization to self manage SCPs on OUs where their accounts are located.We want to know if it is possible to do the following:An organization has OU-A, OU-B and OU-C etc.An account in OU-B wants to use IAM-User-B to create an SCP in the Management account and assign to OU-BIAM-USer-B must be have the ability to create/modify/delete SCPs in Organizations in the Management account, but can ONLY assign the SCP to OU-B.Any attempt to assign an SCP to OU-A or OU-C will be denied.Auditing is in place and a notification is triggered of any invalid attempt by IAM-User-BThe same principle must be applied to users in OU-A and OU-C.Any help is appreciated.ThanksFollowComment"
Delegate SCP administration of specific OU to IAM role of a member account
https://repost.aws/questions/QU1TFjtseWQOW9D6yqWjIGdg/delegate-scp-administration-of-specific-ou-to-iam-role-of-a-member-account
false
"1Well, if I understand you correctly, when you assign the policy to the user or role the user assumes (do not use users please, use always temp credentials so assume roles), what you can define on that policy is with resource, so you limit the permissions you grant in the policy to that specific resource, here is the idea:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "organizations:Describe*", "organizations:List*" ], "Resource": "*" }, { "Effect": "Allow", "Action": "organizations:AttachPolicy", "Resource": "arn:aws:organizations::<masterAccountId>:ou/o-<organizationId>/ou-<organizationalUnitId>" } ]}As you can see on the Resource line, you can restrict the OU in the resource line, to the attach policy permission. Hope this helps to build the desired policy, here is the documentation:https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_attach.htmlYou can also play with some global conditions and ResourceOrgPaths here: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.htmlbest.CommentShareJuanEn_Ganswered 7 months agorePost-User-0529671 7 months agoThanks for you r reply @JuanEn_GI can see how that would restrict attachment, but what would they need to allow the IAM role to create/amend/delete SCPs in Organizations?Share"
Is it possible to change an old t2.micro instance to the new t3.micro instance type? What are the necessary steps to follow? What should I take care?FollowComment
Convert a t2 instance to t3
https://repost.aws/questions/QUG_Q2Tu0cQemkhTMoISVLtQ/convert-a-t2-instance-to-t3
false
"0It seems that ENA should be enabled in the old instance: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.htmlAny comments?CommentSharefedkadanswered 5 years ago0Fixed the problem. I had to use UUID format in fstab in my case.Follow https://github.com/awslabs/aws-support-tools/tree/master/EC2/C5M5InstanceCheckshttps://aws.amazon.com/premiumsupport/knowledge-center/boot-error-linux-m5-c5/Edited by: Bali on Aug 24, 2018 2:33 PMEdited by: Bali on Aug 24, 2018 2:34 PMCommentShareBalianswered 5 years ago0even enabling ENA on the old instance doesn't let me change the instance type to t3, nor launch from an AMI \[that also has ENA supported]https://forums.aws.amazon.com/thread.jspa?threadID=290005if you got this to work, please let me know howEdited by: finderful on Sep 17, 2018 7:54 PMCommentSharefinderfulanswered 5 years ago0I haven't done any further tests on this subject. It was just a theoretical question. More detailed AWS documentation may be needed on this topic.CommentSharefedkadanswered 5 years ago0I've gotten stuck in the same issue.Although ENA is installed on my T2 instance, I can not select T3 instance when lunching from the AMI that created from the T2 instance.The results below belongs to my T2 istance:ubuntu@ip-333-33-33-333:~$ modinfo enafilename: /lib/modules/4.15.0-45-generic/kernel/drivers/net/ethernet/amazon/ena/ena.koversion: 2.0.2Klicense: GPLdescription: Elastic Network Adapter (ENA)author: Amazon.com, Inc. or its affiliatessrcversion: D8DA28B2F4C946755883EE8alias: pci:v00001D0Fd0000EC21sv*sd*bc*sc*i*alias: pci:v00001D0Fd0000EC20sv*sd*bc*sc*i*alias: pci:v00001D0Fd00001EC2sv*sd*bc*sc*i*alias: pci:v00001D0Fd00000EC2sv*sd*bc*sc*i*depends: retpoline: Yintree: Yname: enavermagic: 4.15.0-45-generic SMP mod_unload signat: PKCS#7signer: sig_key: sig_hashalgo: md4parm: debug:Debug level (0=none,...,16=all) (int)CommentShareefkantanswered 4 years ago0In the end, I could manage by running the command ofaws ec2 modify-instance-attribute --instance-id i-c7746f20 --ena-supportCommentShareefkantanswered 4 years ago0This worked for me - after making sure the instance was in a stopped stateCommentSharedanimalityanswered 4 years ago"
We have a usecase to read Oracle table and publish the records into AWS MSK topic. For that purpose we are using MSKConnect and trying to deploy Confluent JDBCSourceConnector. We are using AWS Glue schema registry for schema management. We have used below configuration in our connector but its just giving the error and connector always goes to failed status. key.converter= org.apache.kafka.connect.storage.StringConverter key.converter.schemas.enable= false key.converter.avroRecordType= GENERIC_RECORD key.converter.region= us-east-1 key.converter.registry.name= ebx-control-tbl-registryE key.converter.schemaAutoRegistrationEnabled= true value.converter= com.amazonaws.services.schemaregistry.kafkaconnect.AWSKafkaAvroConverter value.converter.schemas.enable= true value.converter.avroRecordType= GENERIC_RECORD value.converter.region= us-east-1 value.converter.registry.name= ebx-control-tbl-registry value.converter.schemaAutoRegistrationEnabled= trueIts giving below error.[Worker-0dc06f886ba9272ef] Caused by: org.apache.kafka.connect.errors.DataException: Converting Kafka Connect data to byte[] failed due to serialization error:Has anyone successfully used Confluent JDBCSourceConnector with MSK connect and AWS Glue Schema registry?FollowComment
AWS Glue Schema Registry and MSK Connect Integration for AVRO Schema
https://repost.aws/questions/QUF6gnAQ5xT12K_bqDp-dIPw/aws-glue-schema-registry-and-msk-connect-integration-for-avro-schema
false
"0Hello,I am able to use the Confluent Kafka jdbc connect with MSK and integrated with Glue schema registry(GSR) with the below steps. Posting the steps here, in case if it helps Note: I am using mysql as my source instead of OracleCollect the below jarsBuild GSR avro schema converter jarwget https://github.com/awslabs/aws-glue-schema-registry/archive/refs/tags/v1.1.8.zipunzip v1.1.8.zipcd aws-glue-schema-registrymvn clean installmvn dependency:copy-dependenciesA jar file with name schema-registry-kafkaconnect-converter-1.1.8.jar gets created in the directory avro-kafkaconnect-converter/target/Downloaded the mysql connector jar mysql-connector-java-8.0.29.jar from Mysql site. You may need to download Oracle jar hereGet the Kafka jdbc connect jar kafka-connect-jdbc-10.4.1.jar from https://www.confluent.io/hub/confluentinc/kafka-connect-jdbc?_ga=2.120273672.1435912287.1652838995-1650791811.1631804226I zipped all the above 3 jars and uploaded the zip file into an s3 bucketI created an MSK custom plugin using the above file in s3 bucketI created a simple MSK cluster(without any authentication) in the private subnets of my VPC which has a route to internet via NAT gatewayI created a topic with the same name as the mysql tableI created an MSK connector from the plugin created in (2) with the config like belowconnector.class=io.confluent.connect.jdbc.JdbcSourceConnectorconnection.url=jdbc:mysql://myip:3306/mydbconnection.user=XXXXXconnection.password=XXXXtable.whitelist=mytbltasks.max=5mode=bulkkey.converter= org.apache.kafka.connect.storage.StringConverterkey.converter.schemas.enable= truekey.converter.avroRecordType=GENERIC_RECORDkey.converter.region=us-east-1key.converter.registry.name=testregistrykey.converter.schemaAutoRegistrationEnabled=truevalue.converter= com.amazonaws.services.schemaregistry.kafkaconnect.AWSKafkaAvroConvertervalue.converter.schemas.enable=truevalue.converter.avroRecordType=GENERIC_RECORDvalue.converter.region=us-east-1value.converter.registry.name=testregistryvalue.converter.schemaAutoRegistrationEnabled= trueRef linkshttps://docs.confluent.io/kafka-connect-jdbc/current/source-connector/source_config_options.html#jdbc-source-configshttps://docs.confluent.io/platform/current/schema-registry/connect.htmlhttps://aws.amazon.com/blogs/big-data/evolve-json-schemas-in-amazon-msk-and-amazon-kinesis-data-streams-with-the-aws-glue-schema-registry/After completing all of the above steps, the MSK JDBC connect is able to extract the table and push the rows into the MSK topic.CommentShareSUPPORT ENGINEERChiranjeevi_Nanswered a year ago"
I know DAX exists but it requires special clients to communicate with. These clients are not available/integrated with COTS packages. The ElastiCache (either memcache or redis) interface has wide support amongst COTS software. The route would be DynamoDB --> DMS --> ElastiCache. This way DynamoDB could be integrated with many software packages that support a K/V store.Therefore my request is whether you're willing to support DynamoDB as a source system for DMS?FollowComment
Support for DynamoDB as source system
https://repost.aws/questions/QUFvn0Gk4IRZa60v96l_kCjA/support-for-dynamodb-as-source-system
false
"1Curious as to why you select your path as DynamoDB -> DMS -> Elasticache ?For example, if you are looking to use DynamoDB as a source of truth and want to read from Elasticache and want near real time ongoing replication, then you can use:DynamoDB -> DynamoDB CDC -> Lambda -> ElasticacheDynamoDB StreamsIf your use-case is a one time dump of DynamoDB to Elasticache, then you can export your DynamoDB table to S3, and use S3 as a source for DMS.Export to S3Furthermore, if you are looking for a persistent KV store for you Elasticache workloads, have you considered MemoryDB which now supports JSON workloads?MemoryDBCommentShareEXPERTLeeroy Hannigananswered a year ago0internally there is a feature request to support DDB as source for DMS but again service team will take its own time to release the feature. Once generally available it will be stated on this pagehttps://docs.aws.amazon.com/dms/latest/userguide/WhatsNew.htmlCommentShareAWS-User-8174525-subhashanswered a year ago"
"On Dec 1, 2021, AWS put out a press release regarding SageMaker Ground Truth Plus that contained the statement:To get started, customerssimply point Amazon SageMaker Ground Truth Plus to their data source inAmazon Simple Storage Service (Amazon S3) and provide their specificlabeling requirements (e.g. instructions for how medical experts shouldlabel anomalies in radiology images of lungs).Can AWS provide medical experts for labeling medical data? Or am I misinterpreting this statement and the services included in this "turnkey" solution. (BTW, I've already built and tested a custom segmentation task for SageMaker Ground Truth and am looking for "expert" labeling.)FollowComment"
AWS Ground Truth Plus Available Medical Expertise
https://repost.aws/questions/QU1vINii7DRkGK-4klJOq7wA/aws-ground-truth-plus-available-medical-expertise
false
"0That's a poor choice of example on their part; the FAQ for the service explicitly states: "Currently, Amazon SageMaker Ground Truth Plus is not a HIPAA eligible service."CommentShareQuinnyPiganswered 6 months agob33fcafe 6 months agoTrue. However, there is plenty of medical data not covered by HIPAA, as is the case with the data I wish to have processed. Do you know if they have medical experts available for labeling / segmentation tasks?Share"
"I want to enable IAM Identity Center and configure an external IdP for an existing AWS account. This AWS account already has users, created with IAM.What happens to these users?I'm especially worried about users used by my application to, for example, access S3 buckets. They have no password but only an access key and secret. Will these users' keys work after the configuration of the external IdP?ThanksFollowComment"
What happens to existing AWS IAM users when enable IAM Identity Center?
https://repost.aws/questions/QUhofleOTNR16Q8LBFCXiuCA/what-happens-to-existing-aws-iam-users-when-enable-iam-identity-center
false
"2Hi ThereNothing will happen to the existing IAM users and access keys when you deploy IAM Identity Center and federate with an external IdP. THey can co-exist.See https://repost.aws/questions/QUfNomVCt5TCiac7oQoT8n0A/can-i-keep-existing-iam-users-and-add-sso-to-our-accountsCommentShareEXPERTMatt-Banswered 3 months ago0Hi Matt, thank you for the answer.So my application will still work even after the IdP configuration, great.But in the Review and confirm step of the configuration, I saw this point:IAM Identity Center preserves your current users and groups, and their assignments. However, only users who have usernames that match the usernames in your identity provider (IdP) can authenticate.This affect only users with a password?Thank youCommentShareGigitsuanswered 3 months agoDon Edwards 3 months agoThat message only applies if you already have users and groups defined within IAM Identity Center's native user store. It is not talking about IAM users and groups.https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-identity-source-considerations.html#:~:text=AD%20directory.-,Changing%20between%20IAM%20Identity%20Center%20and%20an%20external%20identity%20provider%20(IdP),-If%20you%20changeShare"
"Hello all,Sometimes when our usersd dial Amazon Connect Instance inbound number, they cannot connect to the system. They get a busy sound.I checked quotas, but we are not even close to the numbers. (Still in test phase) What can be the reason?ThanksFollowComment"
AWS Connect Busy on Inbound Calls
https://repost.aws/questions/QUJ0T34ptRQIuSvdpmQ8b5KQ/aws-connect-busy-on-inbound-calls
false
"1In the AWS console look for Service Quotas, then Amazon Connect. You want to look at concurrent calls per instance. The default is 10, but I've seen where even though it says 10 that it's actually 2 and you need to put in a request to have 10 or some higher number be configured by AWS.CommentSharedmaciasanswered 9 months ago0For me they increased the limit and yet I'm still getting busy on my configured number.My instance is in US-east-2 btw if that helps... Also this is not the first time I'm configuring numbers... It used to work perfectly for a few months and I decided to terminate all my old instances..recently when I created new instance I'm getting this problem!CommentShareJulian Frankanswered 2 months agodmacias 2 months agoFirst, I would ask AWS to validate that they did increase your quota. I've seen plenty of times where they say they did, but didn't. Next, is the failure consistently on multiple calls to the same number?ShareJulian Frank 2 months agoI keep creating and destroying instances...At one point I started getting message saying I could create new number only if I increase quota limit. so I raised a request and got the quota increased and now able to create new number. But now Calls are not reaching this number nor am I able to do outbound from this number. Calls just disconnect after a "busy tone"Share0You're going to have to break this down further. Does Connect own the number or are you forwarding the calls? Do you see anything in CW? Are you sure sure sure it's not a quota issue? I've seen an issue where even when the Connect instance says 10 calls that it fails after 2 calls.davidCommentSharedmaciasanswered 10 months ago0Hello,Thank you for response. We are using the number in Connect. We are not forwarding calls. I checked Cloudwatch but did not see any issues. Which metric should I check specifically?ThanksCommentSharebbolekanswered 9 months ago"
When sharing files and folders from WorkDocs I am not able to set an expiration date (it appears a default one is used).FollowComment
Workdocs link options (expiration) are not displayed
https://repost.aws/questions/QUJo8Mp_bPSTaquPKRLyosXA/workdocs-link-options-expiration-are-not-displayed
false
"0Hi there,When you use the Share a Link feature to share a file, you can set the expiration of the link. When sharing a folder, the link expiration is not supported.Thank youCommentSharekamlauanswered 9 months agorePost-User-2369786 9 months agoIt appears that there is a default expiration for folder links. Is there a way to change it?Share"
"What is the difference between using Chef and/or Puppet to administer applications and infrastructure on AWS versus containerization with ECS and EKS.I'm pretty familiar with ECS and somewhat familiar with EKS, but I have zero knowledge of Chef and Puppet outside of knowing their names and reading for about 5 minutes on their respective websites.Would someone please provide a simple, concise explanation of the differences between Chef/Puppet and ECS/EKS, along with example usages of Chef/Puppet vs. containerization?FollowComment"
Chef & Puppet vs. ECS / EKS
https://repost.aws/questions/QUdUpn9IS-RX6FVNzw4O7aSw/chef-puppet-vs-ecs-eks
true
"2Accepted AnswerThese are different things. Chef/Puppet are configuration management systems used to build/manage/deploy applications and infrastructure including container images and containers. ECS/EKS are container platforms where you would run/schedule your container images to be executed and provide a service/function. So as a very basic example specific to containers,Build container images: Chef, Puppet, Ansible, Salt, etc.Deploy container images: Chef, Puppet, Ansible, Salt, etc.Run and schedule container: ECS, EKS, AKS, GKE, PKS, OpenShift, etc.CommentSharecloudy2024answered 3 years ago0As mentioned by another user: Chef and Puppet are Configuration Management tools, while ECS/EKS are Container Orchestration tools.You can create a Chef recipe to, for example, setup and configure an ECS Task Definition, but not the other way aroundCommentShareEduardo_Banswered a year ago"
We have an API that reads the data from DynamoDB by using a PartiQL query in the ExecuteStatementWithContext function. This API is paginated and I rely on NextToken for reading all DynamoDB pages.The problem is that there is an expiry for NextToken and I am getting this error ValidationException: Given NextToken has already expired. Can we increase this expiry in any way?FollowComment
How to increase expiration of NextToken in DynamoDB ExecuteStatement?
https://repost.aws/questions/QUO-3_zqSnR6O3gMv3ySrxSA/how-to-increase-expiration-of-nexttoken-in-dynamodb-executestatement
false
"1PartiQL's NextToken has a validity of 1 hour, which the token then expires. There is no way to increase this using the PartiQL API. However, re-writing your code to suit DynamoDB Vanilla API will allow you to retrieve a LastEvaluatedKey which does not expire and may be better suited for your use-case.CommentShareEXPERTLeeroy Hannigananswered a year agorePost-User-7463373 a year agoThanks for the answer @Leeroy Hannigan.We cannot use Vanilla API because we need to query multiple partition keys in the single query. This is possible currently using PartiQL only.It would be of great benefit if LastEvaluatedKey support is present for PartiQL. I can see there is already a question asked couple of months back(https://repost.aws/questions/QUgNPbBYWiRoOlMsJv-XzrWg/how-to-use-last-evaluated-key-in-execute-statement-request). Is there any ETA on this?ShareLeeroy Hannigan EXPERTa year agoUsing PartiQL as a workaround for a Batch Query is only saving you time on network latency, each Query on the back-end is executed sequentially. So if you use the vanilla API and multi-thread individual Query operations, you will see very little difference in latency and have the advantage of using LastEvaluatedKey.On the ETA of LEK being available for PartiQL, we never provide ETA's for any of our feature releases, but you can keep up to date with the newly released features here: https://aws.amazon.com/new/Share"
"I am trying to create a Custom Visual transform but unfortunately facing some issues.Here my motive it to truncate a MySQL table before loading the data into it and I want to do it with the help of Visual Tranforms not by changing the auto generated script.My script is running continueously with the same log:23/05/14 04:25:00 INFO MultipartUploadOutputStream: close closed:false s3://aws-glue-assets-849950158560-ap-south-1/sparkHistoryLogs/spark-application-1684037765713.inprogressHowever removing all the code except this code is working:from awsglue import DynamicFramedef truncate_mysql_table(self, database_name, table_name, connection_name): return self.filter(lambda row: row['age'] == '21')DynamicFrame.truncate_mysql_table = truncate_mysql_tableThis the code I am using:import pymysqlimport boto3import jsonfrom awsglue import DynamicFramedef truncate_mysql_table(self, database_name, table_name, connection_name): client = boto3.client('glue') response = client.get_connection(Name=connection_name, HidePassword=False) connection_props = response.get("Connection").get("ConnectionProperties") host_name = connection_props.get("JDBC_CONNECTION_URL").rsplit(":", 1)[0].split("//")[1] port = connection_props.get("JDBC_CONNECTION_URL").rsplit(":", 1)[1].split("/", 1)[0] secret_id = connection_props.get("SECRET_ID") client = boto3.client('secretsmanager') response = client.get_secret_value(SecretId=secret_id) secret_data = json.loads(response.get("SecretString")) username = secret_data.get("username") password = secret_data.get("password") con = pymysql.connect(host=host_name, user=username, passwd=password, db=database_name, port=port, connect_timeout=60) with con.cursor() as cur: cur.execute(f"TRUNCATE TABLE {database_name.strip()}.{table_name.strip()}") con.commit() con.close() # print("Table Truncated") return self.filter(lambda row: row['age'] == '21')DynamicFrame.truncate_mysql_table = truncate_mysql_tableMy Glue Connection and MySQL RDS is in the same VPC also I am having VPC endpoints for s3 and secret manager.This shouldn't be a problem becuase after changing (or simplifying) the code it is giving the expected output.Please help.FollowComment"
Glue Custom Visual Script Running indefinitely
https://repost.aws/questions/QUK1gC_kvITT-iFidULD9v_A/glue-custom-visual-script-running-indefinitely
false
"0Put print statements to see exactly where it gets stuck, did it open the connection ok?, was it on the execute or later?In the connection is better if you do conn.autocommit(True), no reason to hold a transaction for that.Also, you should make sure the connection is closed in a finally block.My guess is that the it cannot open the connection, the way you are parsing the JDBC looks fragile to me. Maybe consider using the JDBC driver with Py4J instead of having to parse the url and needing an extra library.CommentShareGonzalo Herrerosanswered 13 days ago"
"I have to install a specific version of openjdk i.e, openjdk-11.0.16 in amazon linux 2.I don't want to install coretto11. we can install using amazon-linux-extras install java-openjdk11, but this approach always installs the latest version and not the specific version I require.How to achieve this?FollowComment"
How to install openjdk-11.0.16 version in amazon linux 2
https://repost.aws/questions/QUY8xB-CpKRWWmk-eCzpn0TQ/how-to-install-openjdk-11-0-16-version-in-amazon-linux-2
false
"0HI,I used this step install openjdk-11-0.16 version in amz 2wget https://builds.openlogic.com/downloadJDK/openlogic-openjdk-jre/11.0.16+8/openlogic-openjdk-jre-11.0.16+8-linux-x64.tar.gzmv openlogic-openjdk-jre-11.0.16+8-linux-x64 /usr/jdk-11.0.16# add this to your /etc/profileexport JAVA_HOME=/usr/jdk-11.0.16export PATH=$PATH:$JAVA_HOME/bin# then,you just source this envsource /etc/profile# check the java version[root@ip-172-31-23-10 ~]# java -versionopenjdk version "11.0.16" 2022-07-19OpenJDK Runtime Environment OpenLogic-OpenJDK (build 11.0.16+8-adhoc.root.jdk11u)OpenJDK 64-Bit Server VM OpenLogic-OpenJDK (build 11.0.16+8-adhoc.root.jdk11u, mixed mode)I hope this can help you,hava a nice day!CommentShareUniStars-CNanswered 6 months ago"
How to change the domain contact when the contact has left the organization so the change can't be 'approved' because no one has access to their prior organizational email account.FollowComment
Route 53 - Change Domain Contact to Remove Prior Contact Not with Organization
https://repost.aws/questions/QU_o7mIeAWQKGZpqAcJVTZ4Q/route-53-change-domain-contact-to-remove-prior-contact-not-with-organization
false
"0Hi, one of the options I can think of is to redirect this person's email address on the mail server to another mailbox. Second option is to reach out to AWS Support following this guide - https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-contact-support.htmlCommentShareJaroslav-AWSanswered 2 months ago"
"Hi,I'm trying to evaluate if lumberyard fits my needs, and I'm very interested in the c++ side, especially the memory management of this engine. This is the only page I've found in the documentation and, to be honest, is quite shallow and lacks any technical details.I would like to know how this engine handles memory allocations in c++, does it have any allocators/memory pools ready to use? Is every allocation/deallocation in the hands of the programmer or it mimics Unreal with its awful, slow and horrible Garbage Collector attached to c++ like a parasite?Thanks for any help or direction you can provide :)Follow"
Lumberyard memory management
https://repost.aws/questions/QUUNgz6pV7T9GPt4ZCDrpjpw/lumberyard-memory-management
true
"0Accepted AnswerI'm not a Lumberyard dev and I'm not too familiar with how Unreal handled memory allocations but I believe the answer is that Lumberyard does offer a lot of control over C++ memory allocations. I agree there's not a lot of information on how different types of allocators work though even in the actual code. I haven't tested the performance of the Lumberyard memory allocation system either but in theory it looks like it should be much faster and work well with multiple threads. Lumberyard does have a few default allocators that can be specified but in a multi-threaded environment you may prefer having more allocators to reduce the chance of synchronization conflicts. The pool allocators in Lumberyard are probably much faster than a generic Garbage Collector due to reduced synchronization issues but I'm guessing the memory usage is higher than a generic Garbage Collector as a result. There is some garbage collection functionality in the Lumberyard allocators based on what I've seen in the code.If you look in /dev/Code/Framework/AzCore/AzCore/Memory/PoolAllocator.h you can get a general idea on how to create additional allocators, pool allocators for small objects in the example below. Every Class usually has an AZ_CLASS_ALLOCATOR macro which overloads all the new/delete calls to use a certain allocator. You also may want to use "aznew", which is recommended in the code somewhere, instead of "new" when doing allocations which includes some extra debugging information based on compile settings I think but I haven't looked into the specifics. I usually use my own pool allocator base class just to reduce some needless redundancy like providing a description for every unique allocator and to cut down on the name length of the class being inherited. The last InitPool function I added below is just a custom one I use to configure the allocator during creation since in my case I only used it to allocate a single class type so the size was always the same. I think 4096 was the max/default page size for pool allocators but you can pre-allocate a few pages as well if needed in the Create function which usually needs called in the System Component initialization or before the allocator would ever get used for obvious reasons. class YourNamespace::PoolAllocatorBase : public AZ::ThreadPoolBase<AZ::ThreadPoolAllocator>{public:const char* GetDescription() const override{return "Generic thread safe pool allocator for small objects";}};class YourNamespace::YourPoolAllocator : public YourNamespace::PoolAllocatorBase{public:AZ_TYPE_INFO(YourPoolAllocator, "{Insert Unique UUID Here}")AZ_CLASS_ALLOCATOR(YourPoolAllocator, AZ::SystemAllocator, 0)public:const char* GetName() const override{return "YourNamespace::YourPoolAllocator";}};class YourNamespace::YourClass{public:AZ_TYPE_INFO(YourClass, "{Insert Unique UUID Here}")AZ_CLASS_ALLOCATOR(YourClass, YourPoolAllocator, 0)...};template<typename PoolName, typename PoolData>void InitPool(AZ::u32 numPages = 0){PoolName::Descriptor cDesc;cDesc.m_pageSize = 4096;cDesc.m_numStaticPages = numPages;cDesc.m_minAllocationSize = sizeof(PoolData);cDesc.m_maxAllocationSize = sizeof(PoolData);AZ::AllocatorInstance<PoolName>::Create(cDesc);}SharerePost-User-9435498answered 5 years ago0thanks for the feedback, I'm looking forward to hear more :DSharerePost-User-3397397answered 5 years ago0Lumberyard memory management does employ a garbage collector that the allocators make use of. 
Generally, each class that makes use of an allocator has the ability to customize how it interacts through the usage of schemas such as the ones I detailed and potentially custom ones if you wanted to go that route. That'd be a way to handle more specific use cases you have in mind if what's available isn't quite right for you. In that sense, there is a degree of control/customization in memory management while staying in the bounds of the provided memory management. Personally, I recommend using the allocators since they were built with optimizing memory access in mind. You are certainly free to use other approaches however.SharerePost-User-6135477answered 5 years ago0Hey @REDACTEDUSERI have also submitted your perspective on documentation as well -- thank you for this note! It is greatly helpful for the teams working on improving such aspects :)SharerePost-User-5738838answered 5 years ago0This is a great and thorough analysis but I see that there's a common question on the types of allocators.You can find the definitions for most of the allocator types in dev\Code\Framework\AzCore\AzCore\Memory. Here's a quick breakdown of the schemas.Hpha is the default and is capable of handling small and large allocs alikeHeap is designed for multithreaded use and is consequently nedmalloc basedBestFit is used for heavy resource management and consequently bookkeeps outside of the managed memory. GPU resources are a good example for thisChild, as the name implies, lets you child an allocator to another. An example would be for tracking purposes on the parentPool uses a small block allocator expressly for small allocs. There's also ThreadPool which uses thread local storageI'd also recommend using aznew over new as it goes through the allocators. It's not required but certainly recommended.SharerePost-User-6135477answered 5 years ago0Epic has decided that in order to make c++ more accessible to a larger audience they needed to add a garbage collector to it. Long story short, every class that inherits from UObject is tracked by Unreal's garbage collector that runs every once in a while, unfortunately their c++ is tied to it and cannot be turned off and this can be a strong hit on performance.That's why I'm interested if Lumberyard uses a similar strategy. From your reply it seems to leave the control (of allocations, deallocations and memory management) in the hands of the programmer, am I right? if so, I really like it.SharerePost-User-3397397answered 5 years ago"
"Hi,It seems that whatever I do I can't get CodeBuild to build my Ruby project. I need Ruby version 2.7.4 so as always I added .ruby-version file in the root folder of my app with ruby-2.7.4 but the build fails with error:rbenv: version `ruby-2.7.4' is not installed I set ruby-2.6 in my .ruby-version but the build fails again with errorrbenv: version `ruby-2.6' is not installed I tried installing proper Ruby version at build so I set this command:pre_build: commands: - rbenv install 2.7.4but I get the same error:rbenv: version `ruby-2.6' is not installedWhat am I doing wrong? What am I missing?FollowCommentMichael_K a year agoWhich build image are you using? I was able to do this with aws/codebuild/amazonlinux2-x86_64-standard:3.0Sharejedrek a year agoI am using aws/codebuild/amazonlinux2-x86_64-standard:3.0When I set ruby: 2.7.4 I get an errorPhase context status code: YAML_FILE_ERROR Message: Unknown runtime version named '2.7.4' of ruby. This build image has the following versions: 2.6, 2.7Share"
Can't install Ruby on CodeBuild
https://repost.aws/questions/QUQUjIw0ADS1WS1Tr2FbVGTw/can-t-install-ruby-on-codebuild
false
"Hey,We have an existing account, we tried to add to control tower enrollment. It failed and compliance status is unknown.So tried to recover by deleting the account factory provisioned product and add the account back to Ou.But did not solve my problem, since I could not see the enroll option enabled, it is in disabled state.We have role created in new account, sts is enabled.Please guide me on how can I recover itFollowComment"
Unable to recovery from enrollment of existing account to control tower
https://repost.aws/questions/QUDJ6884K1SK6mC0uzBnHs-A/unable-to-recovery-from-enrollment-of-existing-account-to-control-tower
false
"0Hi ThereHave you tried moving the account to the root OU and then enrolling it?From https://docs.aws.amazon.com/controltower/latest/userguide/troubleshooting.html#enrollment-failedIn this case, you must take two recovery steps before you can proceed with enrolling your existing account. First, you must terminate the Account Factory provisioned product through the AWS Service Catalog console. Next, you must use the AWS Organizations console to manually move the account out of the OU and back to the root. After that is done, create the AWSControlTowerExecution role in the account, and then fill in the Enroll account form again. If that does not enable the Enroll button, then try creating a new OU, moving the account into that OU, and registering that OU. That will start the enrollment process again.CommentShareEXPERTMatt-Banswered 10 months ago"
What happens when the leader node fails? Will the cluster fail? And what are the RTO and RPO if it recovers? Thanks.FollowComment
"about redshift HA, what happens when the header node fails."
https://repost.aws/questions/QUciaaAWKrRS2yl73-Fk5bYg/about-redshift-ha-what-happens-when-the-header-node-fails
true
"0Accepted AnswerAmazon Redshift will automatically detect and replace a failed node in your data warehouse cluster. As to RTO, it dependson many factors and can span anywhere between minutes to an hour. AWS does not promise an SLA.As to RPO, it depends on the size of your Redshift cluster. If this is a single-node cluster (which is NOT recommended for customer production use), there is only one copy of the data in the cluster. When it is down, AWS needs to restore the cluster from the most recent snapshot on S3 and that becomes your RPO. For a multi-node Redshift cluster, when a lead node is down, the cluster will be down. But once the failed lead node recovers, the cluster comes backup at exactly the same point as when it crashed (including all successfully committed transactions up to crash). So there will be no data loss. For those "in flight" transactions that were interrupted by the crash, you need to re-run them.CommentShareEXPERTAWS-User-2618129answered 5 years ago"
"My typical use is to sync a series of directories/subdirectories from S3 to a cifs mounted SMB share on a local Linux machine.After a recent local server reboot/remount of network storage, the sync command now re-transfers EVERYTHING in the directories every time I run it. My belief is that the sync command is pulling the CURRENT time as the local timestamp instead of the modify time of the files.I ran it with dryrun and debug, and got a series of syncstrategy statements that appear to show a comparison between the S3 and local file. If I'm reading this correctly, the filesize is the same, the S3 timestamps showing correctly, but the local file timestamp is showing the immediate current time.The local linux environment ls shows the correct modified timestamp, which matches the s3 ls of the same file.Here is example output from the debug. Note the "modify time:" section. I believe that this shows the correct modify time for the S3 files, but shows the time the command was run for the local files. (modified time: 2022-03-18 16:52:02-07:00 -> 2022-03-24 12:48:39.973111-07:00 <-- the command was run at this datetime, and it seems to climb with each file as the seconds tick by)2022-03-24 12:48:40,066 - MainThread - awscli.customizations.s3.syncstrategy.base - DEBUG - syncing: com.my.bucket.share/FZAOD/album-VariousArtists_TimelessHits-10.mp3 -> /mnt/qnapprd_integration/AOD/FZAOD/album-VariousArtists_TimelessHits-10.mp3, size: 2827911 -> 2827911, modified time: 2022-03-18 16:52:02-07:00 -> 2022-03-24 12:48:39.973111-07:00(dryrun) download: s3://com.my.bucket.share/FZAOD/album-VariousArtists_TimelessHits-10.mp3 to FZAOD/album-VariousArtists_TimelessHits-10.mp32022-03-24 12:48:40,066 - MainThread - awscli.customizations.s3.syncstrategy.base - DEBUG - syncing: com.my.bucket.share/FZAOD/album-VariousArtists_TimelessHits-11.mp3 -> /mnt/qnapprd_integration/AOD/FZAOD/album-VariousArtists_TimelessHits-11.mp3, size: 3248378 -> 3248378, modified time: 2022-03-18 16:52:12-07:00 -> 2022-03-24 12:48:39.945111-07:00(dryrun) download: s3://com.my.bucket.share/FZAOD/album-VariousArtists_TimelessHits-11.mp3 to FZAOD/album-VariousArtists_TimelessHits-11.mp32022-03-24 12:48:40,067 - MainThread - awscli.customizations.s3.syncstrategy.base - DEBUG - syncing: com.my.bucket.share/FZAOD/album-VariousArtists_TimelessHits-12.mp3 -> /mnt/qnapprd_integration/AOD/FZAOD/album-VariousArtists_TimelessHits-12.mp3, size: 4518138 -> 4518138, modified time: 2022-03-18 16:52:12-07:00 -> 2022-03-24 12:48:39.981111-07:00(dryrun) download: s3://com.my.bucket.share/FZAOD/album-VariousArtists_TimelessHits-12.mp3 to FZAOD/album-VariousArtists_TimelessHits-12.mp32022-03-24 12:48:40,067 - MainThread - awscli.customizations.s3.syncstrategy.base - DEBUG - syncing: com.my.bucket.share/FZAOD/album-VariousArtists_TimelessHits-13.mp3 -> /mnt/qnapprd_integration/AOD/FZAOD/album-VariousArtists_TimelessHits-13.mp3, size: 8270994 -> 8270994, modified time: 2022-03-18 16:53:03-07:00 -> 2022-03-24 12:48:40.001111-07:00(dryrun) download: s3://com.my.bucket.share/FZAOD/album-VariousArtists_TimelessHits-13.mp3 to FZAOD/album-VariousArtists_TimelessHits-13.mp32022-03-24 12:48:40,068 - MainThread - awscli.customizations.s3.syncstrategy.base - DEBUG - syncing: com.my.bucket.share/FZAOD/album-VariousArtists_TimelessHits-14.mp3 -> /mnt/qnapprd_integration/AOD/FZAOD/album-VariousArtists_TimelessHits-14.mp3, size: 5135882 -> 5135882, modified time: 2022-03-18 16:52:33-07:00 -> 2022-03-24 12:48:39.941111-07:00Does anyone have any insight into how this timestamp 
is pulled, or what might stop s3 sync from retrieving the correct modify time of local files?FollowComment"
aws s3 sync syncstrategy shows incorrect timestamp
https://repost.aws/questions/QUqa08v0MFQwSYUk5AP35aew/aws-s3-sync-syncstrategy-shows-incorrect-timestamp
true
"1Accepted AnswerTurns out this was a problem with the CIFS mount itself. Although all seems intact and correct, something is happening at mount that causes the s3 sync to not be able to correctly access the files, or at least some part of their properties. An unmount/remount of the share seems to correct the problem.CommentShared0rkfishanswered a year ago"
"We are trying to deploy a new site and ran into an issue that has been keeping us from moving forward.We have recently upgraded our site from Vue2 to Vue3 and are re-deploying it using Cognito for authentication and Amplify for our site hosting and back-end. In the older version of Amplify, you could define the fields coming in from Cognito, but in the newer version, it automatically generates a aws-exports.js file that does not include the custom attributes from cognito. (It does include all the other settings from our UserPool, but not the custom attributes.)Anyone know how to get these custom attributes over into Amplify? I assume it has something to do with the automatically generated cloud formation templates, but I could not figure it out.FollowComment"
How do you get custom Cognito Attributes into Amplify Back-End?
https://repost.aws/questions/QUcmC6Omt3QLiXTlks1KTxYQ/how-do-you-get-custom-cognito-attributes-into-amplify-back-end
false
"Hello,I am trying to query an s3 bucket which has data stored in folders organized by year, month, date and hour (s3://mybucket/2017/06/01/01) and has all the files in gz format. (I have replaced my bucket name and column name with dummy names)I created table using the following queryCREATE EXTERNAL TABLE IF NOT EXISTS testdb.test_tables (col1 string,col2 string,.....) PARTITIONED BY (year int,month int,date int,hour int)ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'WITH SERDEPROPERTIES ('serialization.format' = ',','field.delim' = ',') LOCATION 's3://mybucket/'TBLPROPERTIES ('has_encrypted_data'='false')This query runs successfully.After that I am running,MSCK REPAIR TABLE test_tablesIt returns values sayingPartitions not in metastore:test_tables:2017/05/14/00test_tables:2017/05/14/01test_tables:2017/05/14/02test_tables:2017/05/14/03test_tables:2017/05/14/04test_tables:2017/05/14/05test_tables:2017/05/14/06test_tables:2017/05/14/07test_tables:2017/05/14/08This doesnt seem right.And then when I run a basic queryshow partitions test_tables.It returns only "Query Successful" with nothing else.Also when I runselect * from test_tables limit 10;It returns nothingFollowComment"
Query in Athena partitioned data
https://repost.aws/questions/QUeFKLZ55wR5iaRTVNL_TfSw/query-in-athena-partitioned-data
false
"0It is happening because the partitions are not created properly.Partitioning can be done in two ways - Dynamic Partitioning and Static Partitioning.For dynamic partitioning, your folder structure should be of the form:s3://mybucket/year=2017/month=06/day=01/hour=01In this case, when you'll run "MSCK REPAIR TABLE test_tables" query after creating table, the partitions will be identified automatically.Since your folder structure is :s3://mybucket/2017/06/01/01You need to add partitions manually after creating table and before executing repair table query. This can be done using the following query:ALTER TABLE test_tables ADD PARTITION (year='2017',month='06',day='01',hour='01') location 's3://mybucket/2017/06/01/01'For more details refer -http://docs.aws.amazon.com/athena/latest/ug/partitions.htmlCommentSharekaru07answered 6 years ago0Correct. The MSCK repair table only works if your prefixes on S3 are in a key=value format. Else you need to manually add partitions. Also, if you are in US-East-1 you can also use Glue to automatically recognize schemas/partitions. See http://docs.aws.amazon.com/athena/latest/ug/glue-faq.htmlCommentShareAWS-User-0617186answered 6 years ago"
"Hello,when i am trying to send a Downlink to a lorawan device and want to check the message queue in the aws console IoTCore -> Wireless connectivity -> Devices -> specific device: i am receiving this error as a red notification error pop up after a few seconds.Value not valid1 validation error detected: Value 'null' at 'WirelessDeviceType' failed to satisfy constraint: Member must have non-null and nonempty valueAlthought i get no error message if i invoked the send data to device API from the CLI command line or from an lambda function using boto3. Also when i am using the console to queue a downlink i got a green conformation box but afterwars this error and nothing is displayed in the downlink queue section.All the devices send the data normally and apart from this issue everything seems to work just fine. I have no clue why i am getting this error and i found nothing in the documentation. Did anyone encountered the same problem and knows how to solve it ?Thanks for your help in advance.FollowComment"
LoRaWAN IoT Core Downlink / Send data to device / Value 'null' at 'WirelessDeviceType'
https://repost.aws/questions/QUwFvP84gdSR6v_LRXrzsRQQ/lorawan-iot-core-downlink-send-data-to-device-value-null-at-wirelessdevicetype
false
"0Hello. I've been experiencing this issue for about a week. I concur with your observations that it is mainly a nuisance, with little operational impact. Like you, I can't view the downlink queue; besides that, everything still works.I think this issue is for AWS to solve. It has been raised and escalated internally.CommentShareEXPERTGreg_Banswered a year agoGreg_B EXPERTa year agoThe fix has been deployed. Please let us know if you still experience problems.Share"
"Hi,We host our whole platform on Elastic Beanstalk and currently have a problem when our API is getting stressed and scaling out with the help of the Load Balancer and adding a new instance.It seems like when that happens that the load balancer is not trafficking any requests to the instances and just gives an error when trying to reach it. It takes around 5-10 minutes before we can use the API which is quite stressful.Is there anything we can do to minimize the down town with the load balancer scaling or are we doing something wrong?It might be important to note that our API is using a classic load balancer while our Frontend is using a application load balancer.thanks in advance.best regardsEdited by: codingcow on Jul 8, 2020 3:36 AMFollowComment"
Elastic Beanstalk Load Balancer few minutes down time when adding instance
https://repost.aws/questions/QUnNuyAapeRCa8SOXBnd1-YA/elastic-beanstalk-load-balancer-few-minutes-down-time-when-adding-instance
true
"0Accepted AnswerTake a look at the settings your load balancer is using to determine heathy/unhealthy. You can adjust these down to require fewer requests (or less time) to determine if an instance is healthy.Make sure the URL/endpoint for your health check is available (returns a 200 response) as soon as your app is ready to use.The Health tab in the beanstalk console will describe what is happening with the new instances, usually "Checking instance health". Minimizing the work you have to do when a new instance is started can make this shorter. Or using a larger instance size can help too since startup is rather CPU intensive. Or if you are downloading a bunch of stuff at startup (like yum/rpm packages), make sure these are located in the same AWS Region to ensure fast retrieval.CommentSharejohnthussanswered 3 years ago0Thanks for the answer, we would try to look into it when we are back on Monday.it just puzzles me that the current instance is not getting any requests. It makes sense that the new instance is building and would be available in a short time for the LB but why would the current instance not get any requests, the LB should know that the current instance is up and running, am I missing something?CommentSharecodingcowanswered 3 years ago"
"Hello. I would like to know do you have any validation for sender's phone number? As we see a lot of SMS that have not been send in the report, but in the code, we don't have error when phone is not valid, for exampleFollowComment"
SMS validation (SNS)
https://repost.aws/questions/QUczS6Qa3vSVa6S9-Uacg8qA/sms-validation-sns
false
0I believe you mean validating phone numbers before sending SMS? Correct me if I'm wrong. We have an API to validate phone numbers in one of our other services, AWS Pinpoint - https://docs.aws.amazon.com/pinpoint/latest/developerguide/validate-phone-numbers.htmlCommentShareAWS-Sarathanswered 18 days ago
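For reference, a minimal boto3 sketch of that Pinpoint phone number validation API; the region and phone number below are placeholders:
import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

# Validate a destination number before attempting to send an SMS.
response = pinpoint.phone_number_validate(
    NumberValidateRequest={
        "IsoCountryCode": "US",
        "PhoneNumber": "+12065550142",  # placeholder number
    }
)

result = response["NumberValidateResponse"]
print(result.get("PhoneType"))                # e.g. MOBILE, LANDLINE, VOIP or INVALID
print(result.get("CleansedPhoneNumberE164"))  # normalized E.164 form to pass to SNS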
"I was reading about the standardization efforts around distributed tracing, in particular, this document:https://www.w3.org/TR/trace-context/While looking at AWS XRay's implementation, it seems to be going completely custom, with custom HTTP Headers, and custom formating of the hierarchy IDs.When is XRay going to switch to a more standardized format that is compatible with other systems that also implement that standard?I found this project when randomly searching for this topic:https://aws-otel.github.io/I don't quite understand what that is and why it even coexists with XRay.Would appreciate clarification here.FollowComment"
When is XRay going to support W3C Trace Context standard headers?
https://repost.aws/questions/QUy9_LtTJfSs2DWZpkeVSwrw/when-is-xray-going-to-support-w3c-trace-context-standard-headers
true
"0Accepted AnswerHi @julealgon,You're right that X-Ray uses custom id format that is different from the w3c format. This has been in the design since X-Ray's inception for some functional reasons. Since, the custom id format is deeply embedded within the X-Ray architecture, the change to the standardized format is very non-trivial and not on the roadmap for any time soon.That being said, X-Ray now does support traces generated by an OpenTelemetry SDKs that generate w3c format trace ids. You'll need to use the AWS Distro for OpenTelemetry Collector for sending the w3c format traces to X-Ray. More of it here: https://aws-otel.github.io/docs/getting-started/collector . There are more components available as part of AWS Distro for OpenTelemetry to let you use OpenTelemetry SDK with X-Ray service as the tracing backend. You don't need X-Ray SDKs for application in that case. You can find all the information about the AWS X-Ray OpenTelemetry support at https://aws-otel.github.io/Hope this clears up your confusion. Also, could you give us an overview of your use case of having w3c standard ids with X-Ray?CommentShareprashatawsanswered 2 years ago0Thanks for clarifying @prashataws.It's unfortunate to hear there are no current plans to make X-Ray standards-compliant.As for using aws-otel, as far as I understand it, it is currently not an option for us since we are using the .Net stack and that doesn't seem to be supported.Lastly, on the use-case question, there is no use-case per se, I just wanted to make our solution as future proof as possible and as decoupled from proprietary implementations as possible. The question came to mind after I started investigating X-Ray integration in one of our projects.CommentSharejulealgonanswered 2 years ago0Hey julealgon,Support for X-Ray for OpenTelemetry .NET is in our backlog and is currently under constructing, you may come back later to check.CommentShareaws-luanswered 2 years ago"
"I am in the process moving emails from SES to 365 and have to verify domain hosted by AWS. I created the necessary TXT, CNAME and MX records as required by Microsoft but Microsoft doesn't detect the TXT record for SPF. I created this record: "v=spf1 include:spf.protection.outlook.com -all"I hope someone can help.FollowComment"
TXT - 365 didn't detect the added TXT record for SPF
https://repost.aws/questions/QUMl6sdugrR22b2Vl4itRe2Q/txt-365-didn-t-detect-the-added-txt-record-for-spf
false
"0It could be a propagation issue OR you have already an existing TXT record in the zone for the same name. Wait up to 48 hours and or check with DNS propagation checkers out there. if you have another TXT record already, just modify the old TXT and add a new line for your TXT value. The DNS propagation checkers also with return the value of the TXT it gets from your name servers.CommentShareYoungCTO_quibskianswered a year ago"
"Hi,I've been working on the following workshop - https://catalog.us-east-1.prod.workshops.aws/workshops/b0c6ad36-0a4b-45d8-856b-8a64f0ac76bb/en-US/pre-requisites/12-own-aws-accountIn step 2 - Installing the Pre-requisites - you get an error regarding the Python version, even though the workshop says it installs 3.8. If you run python --version you get 3.7.10 on AL2. I tried to fix the issue by emoving the sym links and recreate them using 'sudo ln -s /usr/bin/python3.8 /usr/bin/python3' but that broke other things like YUM.Apologize, if this is not the right forum for feedback and errors.Anyone have suggestions on these workshops and getting them up and going?FollowCommentiwasa EXPERTa year agooh, I also tried it, but it was Python 3.7.10...Checking python versionPython 3.7.10ACTION REQUIRED: Need to have python version greater than or equal to 3.8.0Share"
AWS Serverless SaaS Workshop - Issue - Python Version -
https://repost.aws/questions/QUf4OazLPGSwWA4KOogpft3w/aws-serverless-saas-workshop-issue-python-version
false
0There's a typo in the prerequisites script. The previous version has the correct commands:
Admin:~/environment $ sudo alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
Admin:~/environment $ sudo alternatives --set python3 /usr/bin/python3.8
Admin:~/environment $ python3 --version
Python 3.8.5
The missing piece is the "1" that sets the priority for Python 3.8. You can see the previous commit here: https://github.com/aws-samples/aws-serverless-saas-workshop/commit/b3c667ab46efff1b26e537214553001840ccbbf2#diff-98187ed6eefbc5f2a831d13c69ea2e74ac780dc49ac64774e29e0020a6e4fea4CommentShareCoreyanswered a year ago0Thanks for reporting this issue. This issue was fixed a while ago.CommentShareUjwal Bukkaanswered 4 months ago
"Hi,Im still trying to create "EKS optimized" Freebsd AMI using this guide - https://repost.aws/knowledge-center/eks-custom-linux-ami . So im using "awslabs" provided scripts (from here https://github.com/awslabs/amazon-eks-ami) that run "packer". After some time of troubleshooting im stuck at step of installing amazon-linux-extras. Where i can find what this package contain?Thank youFollowComment"
amazon-linux-extras package for Freebsd
https://repost.aws/questions/QUHMfAOj_rTYepIs-6YMLvoA/amazon-linux-extras-package-for-freebsd
true
"0Accepted AnswerFrom the title of your question am I right that the source AMI that you're using is FreeBSD? The knowledge doc you've linked to describes how to create an Amazon Linux (not FreeBSD) AMI to deploy with EKS. This would make sense as I wouldn't expect amazon-linux-extras to be available on FreeBSD.If you start off with an Amazon Linux AMI can you ovcecome this obstacle?PS one more thing - use Amazon Linux 2 (not 2023) as this one contains amazon-linux extras https://aws.amazon.com/linux/amazon-linux-2023/faqs/Q: Does AL2023 have Amazon-Linux-Extras like AL2?A: No, AL2023 does not have extras.CommentShareRWCanswered 7 days agoJoann Babak 6 days agoYes, if i will be using Amazon linux AMI with the script from the repo i have linked in the question everything works well, as it does have this package, but yeah, it seems that the script is not Freebsd compatible, even thought it is said "build you own custom eks optimized AMI"... sad.Share"
"Hello! I created an estimator with a metrics definition for the validation binary accuracy of the trained model as seen below:estimator = TensorFlow(entry_point=code_entry, source_dir=code_dir, role=sagemaker_role, instance_type='local', instance_count=1, model_dir='s3://some-bucket/model/', hyperparameters=hyperparameters, output_path='s3://some-bucket/results/', framework_version='2.8', py_version='py39', metric_definitions=[ { "Name": "binary_accuracy", "Regex": "val_binary_accuracy=(\d+\.\d+)?", } ], script_mode=True)According to Sagemaker's Estimator documentation, by setting the metrics definition, Sagemaker will extract metrics from the training logs using a regex:metric_definitions (list[dict[str, str] or list[dict[str, PipelineVariable]]) – A list of dictionaries that defines the metric(s) used to evaluate the training jobs. Each dictionary contains two keys: ‘Name’ for the name of the metric, and ‘Regex’ for the regular expression used to extract the metric from the logs.The trained model metrics are then available in the properties of the training job in the FinalMetricDataListobject.In the training logs I can see the log below, which matches the regex in the estimator's metrics definition:2wn9x0rkaf-algo-1-43gci | INFO:root:val_binary_accuracy=0.5But when I try to access the metric with the code below:step_training.properties.FinalMetricDataList['binary_accuracy'].Value... I get the following error in my conditional step that checks if the metric is higher than a threshold: Pipeline step 'CheckAccuracyEvaluation' FAILED. Failure message is: {'Get': "Steps.TrainingStep.FinalMetricDataList['binary_accuracy'].Value"} is undefined.I really don't understand why the binary_accuracy metric is undefined. The training model step always conclude successfully.How to get the metrics from a trained model using an estimator with script mode?Thank you in advance!FollowComment"
Estimator Metrics Extraction Problem
https://repost.aws/questions/QUZ2xR8P_wQx-j-Ee-5LKnxQ/estimator-metrics-extraction-problem
false
Ref: https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-identity-source-sso.html
It seems like there is no way to create users via the CLI or API! Am I missing something?Note: I am not after an external identity provider (IdP) or Microsoft AD. I have a simple use case, but I want to manage AWS SSO users in bulk, so I am looking for a solution.FollowComment
Manage identities in AWS SSO - how to create Users via CLI or API ?
https://repost.aws/questions/QU2AqKCKl4QO-p1rbQc-6r1A/manage-identities-in-aws-sso-how-to-create-users-via-cli-or-api
false
"1I am afraid that the answer is no, it is currently not possible to create AWS SSO users via CLI. Having said that, there is already a feature request in place about such functionality. However, I will not be able to provide an ETA at present on when and if such a feature will be released.With that said, I would also suggest to have a look on this blog which shows "how to bulk import users and groups from CSV into AWS SSO" in case you are interested.https://aws.amazon.com/blogs/security/how-to-bulk-import-users-and-groups-from-csv-into-aws-sso/CommentShareSUPPORT ENGINEERNirmal_Kariaanswered a year ago1It's now possible using the new Identity Store API:https://docs.aws.amazon.com/singlesignon/latest/IdentityStoreAPIReference/welcome.htmlThat's a very, very, very good news.CommentSharerePost-User-9871739answered 7 months ago0Is it possible that with identitystore you only can manage local users (not users from a directory)? I don't see any option to sync any user/group from a AD Connector directory in cli help. Is it only possible to do that with web console?CommentSharerePost-User-7053050answered 5 months ago"
"I want to train a model using these pictures as training data: https://www.dropbox.com/s/n35jne68hjws2d4/runes.zip?dl=0And the test data will be this: https://imgur.com/4LQIUSX.jpgAnd this: https://imgur.com/LOKzEOp.jpgI have succeeded in labeling everything I need and my understanding is that this should be viable (or does anybody have any feedback on how my data should look like for this to work? Maybe I'm doing something wrong?)However I've tried to train my Rekognition model five times and it's failed every time with "Amazon Rekognition experienced a service issue. It seems to me that this is an internal failure in the service, can I get some feedback on what I can do?FollowComment"
[AWS Rekognition] "Amazon Rekognition experienced a service issue" Internal Failure and can't train a model.
https://repost.aws/questions/QUd020xg7VQe2RVlwgqEWu1g/aws-rekognition-amazon-rekognition-experienced-a-service-issue-internal-failure-and-can-t-train-a-model
false
"0Thanks for reaching out. Can you please share the AWS Region in which you ran these trainings and approximate time of trainings to help us debug the failures.CommentShareVipul-at-Awsanswered a year agonullset2 a year agoI used us-west-2 (Oregon) and training runs for about 30 minutes before it fails. I tried it many times over Saturday and Sunday, last was around 1 or 2 pm on Sunday.If this helps, I'm documenting my journey https://the-null-log.org/post/675396509586604032/i-developed-a-sheikah-rune-translator-with-awsShare"
"If there is actual documentation on this, my apologies as I've hunted several hours for the solution. As an Oracle DBA, I need to monitor certain log files for certain strings. For the listener log, one type of message contains "service_update * <db_name> * STATUS * [0 or non-zero]." I can parse everything up to the asterisk but even with double-quote delimiters, I cannot figure out how to include the end of that string. Specifically, I need to alert whenever a non-zero status is thrown.Similarly, for the alert log I need to flag messages containing "ORA-" error messages except for 'ORA-1". Thanks in advance for your aid..FollowComment"
CloudWatch trouble parsing @message for a string with wildcards.
https://repost.aws/questions/QUi0HlYeqoTZWDryILAfFp_Q/cloudwatch-trouble-parsing-message-for-a-string-with-wildcards
true
"0Accepted AnswerHello, thanks for reaching out!While I'm not totally familiar with the full format of Oracle logs, hopefully these examples can help out here. Also, I wasn't fully clear on if you're attempting to parse these logs using Logs Insights or for setting up a Metric Filter on a log group for alerting, so I'll provide examples for both.For testing, I used the following made-up sample logs messages based on the snippet you provided. The samples ahead assume that the logs are space delimited:2022-04-13 00:00:28 service_update DB_1 STATUS * 02022-04-13 00:00:29 service_update DB_1 STATUS * 12022-04-13 00:00:30 service_update DB_1 STATUS * 12022-04-13 00:00:31 service_update DB_1 STATUS * 12022-04-13 00:00:32 service_update DB_1 STATUS * 12022-04-13 00:00:33 service_update DB_1 STATUS * 02022-04-13 00:00:34 service_update DB_1 STATUS * 12022-04-13 00:00:35 service_update DB_1 STATUS * 12022-04-13 00:00:36 service_update DB_1 STATUS * 02022-04-13 00:00:37 service_update DB_1 STATUS * 12022-04-13 00:00:38 service_update DB_1 STATUS * 12022-04-13 00:00:39 service_update DB_1 STATUS * 02022-04-13 00:00:40 service_update DB_1 STATUS * 1In Logs Insights, the following query would return only log messages where the status is not equal to 0 by parsing the string to seven unique fields:fields @timestamp| parse @message "* * * * * * *" as date, time, action, db, type, asterisk, status| filter status!=0| sort time descSimilarly, if you wanted to create a metric filter on the log group to generate a metric for non-zero status in order to create an alarm, the following metric filter pattern successfully parses and filters for status!=0:[date, time, action, db, type, asterisk, status!=0]On the same note, for your alert logs (again assuming space delimiting), given the sample logs:2022-04-13 00:00:28 service_update DB_1 STATUS ORA-1 02022-04-13 00:00:29 service_update DB_1 STATUS ORA-2 12022-04-13 00:00:30 service_update DB_1 STATUS ORA-3 12022-04-13 00:00:31 service_update DB_1 STATUS ORA-8 1You can generate a metric filter pattern to monitor and filter only for logs that contain "ORA-*" except for "ORA-1":[date, time, action, db, type, error!=ORA-1, status]Hope this helps, let me know if my assumptions were incorrect!CommentShareSUPPORT ENGINEERJustin_Banswered a year agoI-used-to-be-a-Buffalo 8 months agotyvm, this was on hold for a while and I am just now resuming work. A pleasure to come across your input. By any chance have you come across these two refinements? Is there support for regular expressions, and secondly is there a preferred method for timestamp comparisons? In the first query, ORA- error messages to ignore are the set of ORA-0, ORA-00000, ORA-1, or ORA-00001. In SQLPlus syntax I would simply query for the four strings using an IN clause.Traditionally I would query for something like @timestamp > sysdate - 30 to search the most recent half-hour. TIA if either of these piques your interest.Share"
We're looking into using Amazon WorkSpace with Managed Microsoft AD. Is it true that those two services are billed separately?FollowComment
Is AWS Managed Microsoft AD billed separately from Amazon WorkSpaces?
https://repost.aws/questions/QUWu0adC1nSnmX3CtKhj22Hw/is-aws-managed-microsoft-ad-billed-separately-from-amazon-workspaces
true
"0Accepted AnswerYes, AWS Managed Microsoft AD would be billed separately from any Amazon WorkSpaces that you have deployed. Amazon Workspaces pricing includes the use of AWS Directory Services for Simple AD and AD Connector (where available). The use of AWS Directory Services for Microsoft AD isn't included.For more information, see Amazon WorkSpaces pricing.CommentShareEXPERTSpencerAtAWSanswered 3 years agoEXPERTJeremy_Greviewed 2 years ago"
"Good afternoon,I am have an AWS EC2 Instance running centos and PHP 7.4.28. The libpng version included is 1.5.13. There are several bugs in this version, and would like to upgrade to the latest release of 1.6.37. However, yum update cannot find any packages to update.Is there any plan to make 1.6.37 available?Thanks,BenFollowCommentAbayomi a year agoHey!You should try :$ wget https://downloads.sourceforge.net/project/libpng/libpng16/older-releases/1.6.37/libpng-1.6.37.tar.gz -O libpng.tar.gz$ tar -xvf libpng.tar.gz$ cd libpng-1.6.37$ sudo bash configure --prefix=/usr/local/libpng$ sudo make installShare"
Upgrade libpng to 1.6
https://repost.aws/questions/QUOTdf-9pcQayvg4FYmDPq5A/upgrade-libpng-to-1-6
false
"0Hey! Thank you for reaching out with your question.On a running CentOS system with PHP 7.4 and libpng1.5 already configured, you can use the following steps to upgrade to libpng1.6- 1. Install wget to your instance for internet package download capability by running - $ sudo yum install wget2. Ensure you have gcc installed to have an available path compiler for packages - $ yum install gcc3. Install libpng1.6 to your instance with the following steps$ wget https://download.sourceforge.net/libpng/libpng-1.6.37.tar.gz$ tar xvfz libpng-1.6.37.tar.gz$ cd libpng-1.6.37$ ./configure --prefix=/usr/local/libpng/1_6_37 NOTICE- If you receive a "lib not installed" prompt after this step run the following command before continuing with later steps - $ sudo yum install zlib-devel $ make$ make install And there you go, now libpng 1.6 should be up and running on your systemCommentSharerePost-User-8939786answered 9 months agoAWS-User-3445646 9 months agoGoing this route, I am assuming I will need to rebuild PHP? Is there an amazon php package for PHP 7.4 that includes the latest (1.7.37) version of libpng?Share"
"I am new to AWS E2. I have allocated an elastic IP address successfully. But when I tried to associate it with a running instance, there is no instance listed on the web console.I tried searching and reading how elastic IP address works, but did not see anything that is about this issue. (the instance has beenn running well with a public address).anyone out there can help?FollowComment"
no instance is listed when I tried to associate an elastic IP address
https://repost.aws/questions/QU-yQxCnUoT8yVSWvuJ38dIA/no-instance-is-listed-when-i-tried-to-associate-an-elastic-ip-address
false
"1Always double-check that you are in the correct region :)CommentShareJohn Voldanswered 2 months agorePost-User-4791185 2 months agothanks for the help. i checked. the instance is in east-1 and elastic ip address is reserved also in east-1 (east-1d). so this is not the root problem.Share0Sometimes I've seen this sort of behavior because of a browser issue. Can you try a different browser to see if it will populate the drop down.You could always startup a CloudShell session and run the following command, supplying your specific instance-id and allocation-id:aws ec2 associate-address --instance-id i-01234567890 --allocation-id eipalloc-9876543CommentShareEXPERTkentradanswered 2 months agorePost-User-4791185 2 months agothanks for the help. tried different browsers (w/ and w/o private browsing too). it did not resolve the issue.Share"
Hello!We have a mac1.metal instance working for deployment that was created a few years ago with a Catalina AMI (amzn-ec2-macos-10.15.7-20201204-234310). We used this instance to build iOS/Android apps successfully until Apple changed the minimum SDK for a build to iOS 15, which requires Xcode 13, so we need to upgrade the OS to a newer version. I made a snapshot of the current instance and then I tried to upgrade it, but when the machine restarted I got stuck there because I think it requires physical access to confirm the OS installation, so I rolled back the instance with the snapshot I previously created. Is it possible to upgrade the instance without losing any data/configuration? I searched the AWS documentation but I didn't see how to upgrade the OS.Thank you!FollowComment
How to upgrade Mac OS version on mac1.metal instance?
https://repost.aws/questions/QUs34hws9rT2a4jUnbbQ-hPA/how-to-upgrade-mac-os-version-on-mac1-metal-instance
false
"0These steps should help update the OS, but most use our vended AMIs for the new OS for IaC and automation purposes. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-mac-instances.html#mac-instance-updatesCommentShareSeanManswered a year agorePost-User-2848374 a year agoHello SeanM!Thanks for the reply. Yes, I see that on the documentation and I tried to update following the steps to list the available updates but I only have update for Xcode 12 and macOS Catalina.Label: Command Line Tools for Xcode-12.4Title: Command Line Tools for Xcode, Version: 12.4, Size: 440392K, Recommended: YES,Label: Safari15.5CatalinaAuto-15.5Title: Safari, Version: 15.5, Size: 92202K, Recommended: YES,Label: macOS Catalina Security Update 2022-004-10.15.7Title: macOS Catalina Security Update 2022-004, Version: 10.15.7, Size: 2033153K, Recommended: YES, Action: restart,So when I tried to update the os to BigSur I did it using Screen sharing and the Store but when I did this I got stuck and the machine didn't start so I rollback to the snapshot. Searching a little bit more I found on one site they said "Do not upgrade a computer unless you have physical access to it. Attempting to upgrade a computer remotely will drop the remote connection leaving the computer stuck waiting for local user input."Share"
"Hi,When issuing queries on a large table Redshift risks running out of disk space.What are best practices to limit queries to a reasonable disk usage?This is vital to daily operation and an essential security feature if Redshift is used in conjunction with external services.After running out of disk space (160GB total, 80GB used) I vacuumed the table and reduced the size from around 80 GB to 1 GB, but I still experience significant spikes in disk usage with simple queries.Obviously there has to be a way to prevent users from killing the database by issuing a few select queries and I just don't know about it, so I would greatly appreciate your advice.This example query uses up to 10 GB of disk space for more than a minute:explain select * from my_schema.mytable order by created_time limit 1;------------------------------------------------------------------------------------------------------------------------------------------------XN Limit (cost=1000005105905.92..1000005105905.92 rows=1 width=4527)-> XN Merge (cost=1000005105905.92..1000005199888.98 rows=37593224 width=4527)Merge Key: created_time-> XN Network (cost=1000005105905.92..1000005199888.98 rows=37593224 width=4527)Send to leader-> XN Sort (cost=1000005105905.92..1000005199888.98 rows=37593224 width=4527)Sort Key: created_time-> XN Seq Scan on event (cost=0.00..375932.24 rows=37593224 width=4527)------------------------------------------------------------------------------------------------------------------------------------------------[EDIT]I limited the query via WLM to only be able to spill 1G to disk, but it does not abort the query even though it takes up way more disk space. The configuration works as expected otherwise - when I limit the time the query can take it aborts as expected. My guess is that it does not consider the way the query takes up disk space as spilling memory to disk. Please confirm if that is correct.Cheers,Johannes M.FollowComment"
Redshift protect against running out of disk space due to select queries
https://repost.aws/questions/QUBhRJZQcVR-q1MvEVN7YniQ/redshift-protect-against-running-out-of-disk-space-due-to-select-queries
false
"1What is usually the cause of a CPU spike like what you're describing is if you are loading into a table without any compression settings. The default setting for COPY is that COMPUPDATE is ON. What happens is that Redshift will take the incoming rows, run them through every compression setting we have and return the the appropriate (smallest) compression.To fix the issue, it's best to make sure that compression is applied to the target table of the COPY statement. Run Analyze Compression command if necessary to figure out what the compression should be and manually apply it to the DDL. For temporary tables LZO can be an excellent choice to choose because it's faster to encode on these transient tables than say ZSTD. Just to be sure also set COMPUPDATE OFF in the COPY statement.CommentShareAWS_Guyanswered 3 months agojohannes m 3 months agoOh sorry, maybe I accidentally wrote that it is a CPU spike somwhere. The spike is in the disk size not CPU :)I am quite happy with the current compression rate as it is, but thanks for your recommendation! I will be sure to keep it in mind.Share"
"When we would like to spin up Windows 11 EC2 instances, we go the following error:ENA must be supported with uefi boot-modeWe tried out many things, but without success.Thanks,FollowCommentrePost-User-3196201 a year agoI feel like I should add a few details, but it was too long for comment and definitely it wasn't an answer. So I've asked my own question. I feel like the problem is the same.https://repost.aws/questions/QUJbR6AqiOSU2rHHkkyGEB0w/client-error-ena-must-be-supported-with-uefi-boot-modeShare"
Does AWS support Windows 11 EC2 Instances?
https://repost.aws/questions/QUqKQIF1cdQrq6h3hb8yJYiw/does-aws-support-windows-11-ec2-instances
false
"0It sounds like you may be trying to deploy an instance type that does not support UEFI. Can you check whether you are using a current generation instance type such as c5, m5, r5, etc? (full list here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#current-gen-instances)CommentSharebenduttanswered a year agoSildarud a year agoSorry, I wasn't concrete, I missed an important information: We get this error message, when we would like to convert the uploaded image to AMI.Our process:Build the Win11 image locallyUpload the image to S3 bucketConvert the image to AMIThe above solution works in case of Win10, but in case of Win11, we got the mentioned error message.Share0VM Import/Export added support for Windows 11 images (and Secure Boot). After importing the UEFI image, you do need to use a supported instance typeCheck out the blog post Bringing your Windows 11 image to AWS with VM Import/Export for more info.CommentShareEXPERTAndrew_Ranswered 10 months ago"
"Hi there,I've been working on linking up API Gateway to our ECS services for the last few months and as I've started to near completion I've noticed that network requests through to our ECS containers are taking a really long time (around 10000 ms). I've been testing this with a simple GET request into out containers and seeing that the latency seems to occur at the request at the Network Load Balancer.Execution log for request c4cdc901-4641-11e9-ae0f-7ba6f11e24e4Thu Mar 14 10:13:10 UTC 2019 : Starting execution for request: c4cdc901-4641-11e9-ae0f-7ba6f11e24e4Thu Mar 14 10:13:10 UTC 2019 : HTTP Method: GET, Resource Path: /app1/heartbeatThu Mar 14 10:13:10 UTC 2019 : Method request path: {proxy=heartbeat}Thu Mar 14 10:13:10 UTC 2019 : Method request query string: {}Thu Mar 14 10:13:10 UTC 2019 : Method request headers: {}Thu Mar 14 10:13:10 UTC 2019 : Method request body before transformations:Thu Mar 14 10:13:10 UTC 2019 : Endpoint request URI: http://nlb-bc8abe52e89e8e17.elb.eu-west-1.amazonaws.com:9004/heartbeatThu Mar 14 10:13:10 UTC 2019 : Endpoint request headers: {x-amzn-apigateway-api-id=<redacted>, User-Agent=AmazonAPIGateway_<redacted>, Host=nlb<redacted>.elb.eu-west-1.amazonaws.com}Thu Mar 14 10:13:10 UTC 2019 : Endpoint request body after transformations:Thu Mar 14 10:13:10 UTC 2019 : Sending request to http://nlb<redacted>.elb.eu-west-1.amazonaws.com:9004/heartbeatThu Mar 14 10:13:20 UTC 2019 : Received response. Integration latency: 10135 msThu Mar 14 10:13:20 UTC 2019 : Endpoint response body before transformations: {"err":false,"output":"Alive"}Thu Mar 14 10:13:20 UTC 2019 : Endpoint response headers: {X-Powered-By=Express, Access-Control-Allow-Origin=, Content-Type=application/json; charset=utf-8, Content-Length=30, ETag=W/"1e-3OO8BxIRzGH40uT/pZj6IlqId5s", Date=Thu, 14 Mar 2019 10:13:20 GMT, Connection=keep-alive}Thu Mar 14 10:13:20 UTC 2019 : Method response body after transformations: {"err":false,"output":"Alive"}Thu Mar 14 10:13:20 UTC 2019 : Method response headers: {X-Powered-By=Express, Access-Control-Allow-Origin=, Content-Type=application/json; charset=utf-8, Content-Length=30, ETag=W/"1e-3OO8BxIRzGH40uT/pZj6IlqId5s", Date=Thu, 14 Mar 2019 10:13:20 GMT, Connection=keep-alive}Thu Mar 14 10:13:20 UTC 2019 : Successfully completed executionThu Mar 14 10:13:20 UTC 2019 : Method completed with status: 200When I attempt this request in the test section of API Gateway This is the standard response. Generally with a latency varying from 5 seconds to at one point 20 seconds. 
Occasionally I do get through quickly:Execution log for request d8775b9d-4641-11e9-b83c-7fdec07590a2Thu Mar 14 10:13:43 UTC 2019 : Starting execution for request: d8775b9d-4641-11e9-b83c-7fdec07590a2Thu Mar 14 10:13:43 UTC 2019 : HTTP Method: GET, Resource Path: /app1/heartbeatThu Mar 14 10:13:43 UTC 2019 : Method request path: {proxy=heartbeat}Thu Mar 14 10:13:43 UTC 2019 : Method request query string: {}Thu Mar 14 10:13:43 UTC 2019 : Method request headers: {}Thu Mar 14 10:13:43 UTC 2019 : Method request body before transformations:Thu Mar 14 10:13:43 UTC 2019 : Endpoint request URI: http://nlb<redacted>.elb.eu-west-1.amazonaws.com:9004/heartbeatThu Mar 14 10:13:43 UTC 2019 : Endpoint request headers: {x-amzn-apigateway-api-id=<redacted>, User-Agent=AmazonAPIGateway_<redacted>, Host=nlb<redacted>.elb.eu-west-1.amazonaws.com}Thu Mar 14 10:13:43 UTC 2019 : Endpoint request body after transformations:Thu Mar 14 10:13:43 UTC 2019 : Sending request to http://nlb<redacted>.elb.eu-west-1.amazonaws.com:9004/heartbeatThu Mar 14 10:13:43 UTC 2019 : Received response. Integration latency: 10 msThu Mar 14 10:13:43 UTC 2019 : Endpoint response body before transformations: {"err":false,"output":"Alive"}Thu Mar 14 10:13:43 UTC 2019 : Endpoint response headers: {X-Powered-By=Express, Access-Control-Allow-Origin=, Content-Type=application/json; charset=utf-8, Content-Length=30, ETag=W/"1e-3OO8BxIRzGH40uT/pZj6IlqId5s", Date=Thu, 14 Mar 2019 10:13:43 GMT, Connection=keep-alive}Thu Mar 14 10:13:43 UTC 2019 : Method response body after transformations: {"err":false,"output":"Alive"}Thu Mar 14 10:13:43 UTC 2019 : Method response headers: {X-Powered-By=Express, Access-Control-Allow-Origin=, Content-Type=application/json; charset=utf-8, Content-Length=30, ETag=W/"1e-3OO8BxIRzGH40uT/pZj6IlqId5s", Date=Thu, 14 Mar 2019 10:13:43 GMT, Connection=keep-alive}Thu Mar 14 10:13:43 UTC 2019 : Successfully completed executionThu Mar 14 10:13:43 UTC 2019 : Method completed with status: 200But these are far and few between, requests to the ECS container directly are always under 100ms and I can see no latency on it's side. I have also had a look at the health checks and the host has been healthy consistently since it started up. I also ran a couple of artillery tests to the endpoint and the got the following results for 50 requests per second for one minute:All virtual users finishedSummary report @ 10:50:10(+0000) 2019-03-14Scenarios launched: 3000Scenarios completed: 3000Requests completed: 3000RPS sent: 42.6Request latency:min: 227.9max: 10990.1median: 366.5p95: 9892.2p99: 10328.3Scenario counts:0: 3000 (100%)Codes:200: 3000When I drop the frequency of requests down to 1 request per second for 5 minutes I see a large rise in the median request latency:All virtual users finishedSummary report @ 10:39:13(+0000) 2019-03-14Scenarios launched: 300Scenarios completed: 300Requests completed: 300RPS sent: 0.97Request latency:min: 213.8max: 10286.3median: 4882.1p95: 10077.8p99: 10188.8Scenario counts:0: 300 (100%)Codes:200: 300This rise in median makes me think that AWS are reusing resources between frequent requests but I still see a large 10 second delay on occasional requests which to me seems overly long. It makes the frontends feel clunky and under-optimised and it's incredibly disheartening.Has anyone ever came across issues like this? I have looked and I believe I can attribute around 500-1000 ms to the API gateway Custom Domain Name distribution for reusing TLS connections. 
Apart from that I am a little stumped as to what can account for 10 seconds per request.FollowComment"
API Gateway to NLB via VPCLink latency issues
https://repost.aws/questions/QU4zhD4Tb4Qg6z9U1YXeMAxg/api-gateway-to-nlb-via-vpclink-latency-issues
false
"0Hello:Please see this forum thread for a common cause of the 5s-10s delay you are seeing:https://forums.aws.amazon.com/thread.jspa?messageID=871957&#871957Regards,BobCommentShareEXPERTAWS-User-0395676answered 4 years ago0Hi there Bob,This was exactly the answer I needed. I've resolved the issue by enabling Cross Zone Load Balancing onto the Network Load Balancer. I checked my health checks and they seem fine and I cannot reduce the zones that the NLB is provisioned in as the target group it is targeting is auto scaling across all 3 Availability Zones. Whilst testing I only had 1 container running. I believe there is no way to adjust the health checks to tell API Gateway to only go the AZ with a healthy target present? As this would remove the need for Cross Zone Load Balancing. But with it enabled I'm consistently getting sub 30 ms.Thank you!!CommentSharebrycedevanswered 4 years ago"
"I recently added an ElasticCache Redis cluster to my ElasticBeanstalk (EB) environment. When I updated my environment the application could no longer reach the redis cluster and hung, failing the application startup. I had to update the redis security group I had created to include the new EB environment. Is there a way this can happen automatically when I launch a new EB environment?FollowComment"
automatically update ElasticCache security group with new ElasticBeanstalk environment
https://repost.aws/questions/QUh1fsJvsNTy-biZM_2fsOqg/automatically-update-elasticcache-security-group-with-new-elasticbeanstalk-environment
true
"0Accepted AnswerI'm assuming the Elasticache resource is defined outside your EB environment. It is also possible to create the Elasticache cluster within EB eg in this example.If defined outside we can still use ebextensions within your EB environment to dynamically update the security group ingress rules for this elasticache cluster. You will want to create a Security Group Ingress rule for the existing security group. You will need the security group id for the cluster to be saved in SSM parameters, cloudformation export or statically defined within the ebextension yaml file. AWSEBSecurityGroup is what you will use as a Ref within the security group rule (This is the group which is attached to your instances which EB creates) - see here and the example which includes the Elasticache clusterCommentShareEXPERTPeter_Ganswered a month agorePost-User-7887088 a month agoThanks Peter, This helps a lot. After reading your references I settled on using the example here.However, I don't know how to get the redis endpoint URL I need to connect from my app. I expect I need to use a Fn::GetAtt for ConfigurationEndpoin.Address in the config file but how do I then make that visible to my app? I currently set a REDIS_URL environment variable with the URL of the redis cluster I created manually.SharerePost-User-7887088 a month agoI've pieced together a way to get the redis URL that works but I'm not entirely comfortable with it. First I pull the MyElastiCache reference in setup.config by using: AWS_REDIS_NODE = '`{ "Ref" : "MyElastiCache" }`'In my application I can then build my URL like this: redis_url = 'redis://' + app.config.get('AWS_REDIS_NODE') + '.qihti6.0001.use1.cache.amazonaws.com:6379'I'm not sure I can rely on the last part of the string concat. Is there a better way to get the endpoint?SharePeter_G EXPERTa month agoUse what you mentioned previously for AWS_REDIS_NODE but combine with Sub. EG in yaml.!Sub "redis://${MyElastiCache.ConfigurationEndpoint.Address}:${${MyElastiCache.ConfigurationEndpoint.Port}"SharerePost-User-7887088 a month agoThanks again Peter,This got me looking in the right direction. I had to use RedisEndpoint because cluster mode is disabled for my setup. There was an extra "${" typo in your response. I ended up with this statement in my setup.config:REDIS_URL = '`{ "Fn::Sub": "redis://${MyElastiCache.RedisEndpoint.Address}:${MyElastiCache.RedisEndpoint.Port}" }`'and I now use: redis_url = app.config.get('REDIS_URL')in my app. Everything works fine now.Share"
"hi guys!I am researching Amazon DynamoDB.About the Primary key, I know that there are 2 options to setup Primary key (Partition key only and Partition key + Sort key). The Partition key + Sort key must be unique.In the picture below, I don't know what should we do if we have to add the 3rd record that show the same user_id play the same game_id but they have the "draw" Result.If you know, please tell me!FollowCommentUwe K 10 months agoHi Steven,what does the game_id stand for?A game type, like "Kart Racing", "Whack-a-Mole"?Or different rounds of the same game?Because, in case of 1) you'd might need more keys if you want more than one result per user+game.Share"
About DynamoDB Primary key
https://repost.aws/questions/QUX-KFqUKOSX-X-g57UIC9EA/about-dynamodb-primary-key
true
"0Accepted AnswerYou should probably add more to the sort key like a gameplay_id for the particular play of the game. So your SK becomes "1234#de45a" which is a game_id#gameplay_id concatenated together where de45a is the particular run of the game. Then every game play is tracked, and you can easily query to find all plays by a user, all plays of a user for a particular game, and any particular game play.You might have different query needs. The data model you pick is directly based on the update and query pattern. I'm just guessing at your query pattern.CommentSharejzhunteranswered 10 months agoSteven 10 months agothank you!Sharejzhunter 10 months agoSure thing, feel free to accept the answer.Share"
"AWS SSO using Okta.Working on implementing a AWS Client VPN that also uses the Okta authentication. I've enabled the AWS Client VPN app through Okta and have used the meta-data to create an Identity provider for the Client VPN. I've been able to successfully use the AWS Client VPN, it does the Auth through Okta, so I know that it works. But the Second I change my Authorization Routes in the AWS Client VPN to use a Group ID, I loose access to my resources. I've attempted to use the Okta Group ID, as well as the AWS SSO Group Id provided through Amazon. But neither Group ID seems to take. I've also attempted to put the Group Name in the 'Group ID' field with no success.FollowComment"
AWS Client VPN unable to set Authorization Route with Group ID using Okta
https://repost.aws/questions/QUv5yFx9PZT3CqyPoHXtn3Eg/aws-client-vpn-unable-to-set-authorization-route-with-group-id-using-okta
true
"0Accepted AnswerI was able to figure this out. I have a screenshot of this on my Okta Community post, but I needed to specify in the Okta AWS Client VPN Settings that memberOf was equal to .* (Notice the period there). Apparently you need specify which Okta Groups get sent over to the AWS Client VPN. So this wild card sends all groups over and then you can set up your authorization routes using the Group Name for the Group ID.https://support.okta.com/help/s/question/0D54z00007QD9yTCAT/aws-client-vpn-unable-to-set-authorization-route-with-group-id-using-okta?language=en_USCommentShareAWS-User-1278126answered a year ago"
Question about Route53 Private Zones. Customer is trying to retire their BIND infrastructure and go pure r53 but ran into an issue with one of their domains. They use subdomains of (test.com for example) across several accounts and the lack of support for delegation/ns records in private zones is blocking from moving fully into the solution.Are there any plans to support delegation in private zone in the near future?FollowComment
Route53 Private Zones support delegation
https://repost.aws/questions/QUy37Y8njlRxOE-o0lRAKmjw/route53-private-zones-support-delegation
true
"0Accepted AnswerDelegation of private hosted zone within AWS is usually not necessary as long as you only want to use Private Hosted Zones (PHZ). It is mostly baked into the functionality of R53 and can also be extended to on-premises resolution with R53 Resolver endpoints.Only if you want to delegate a sub-zone from or to a non-R53 Auth NS, e.g. on-premises, will you face a feature gap.In your case, where the customer wants to retire all non-R53 Auth NS, they shouldn't be facing this issue.The idea behind DNS delegation is to delegate authority of a part of the namespace to another entity (running their own Auth NS). With R53 you can achieve the same by just using Private Hosted Zones.As such a central team could be in charge of the PHZ "example.com", while a developer team is in charge of "team1.example.com".From a resolution perspective, you can now assign both of these above PHZ to the same VPC. While Route 53 considers this setup an "overlapping namespace", the resulting Resolver rules will give you more or less the same behavior as if you would have delegated the subdomain.If you now deploy a R53 Outbound Resolver endpoint into that same VPC (which has visibility to both of the above zones), will you get the same resolution from on-premises.In case the above domains of example.com and team1.example.com need to be split across R53 and an on-premises Auth NS, you will face the lack of support for delegating a sub-zone from or to a R53 Private Hosted Zone. In some cases, you can work around this with DNS forwarding.CommentShareEXPERTChrisE-AWSanswered 3 years ago"
"Hi there,I am using an EC2 instance (with an FPGA Developer AMI ---centos---) to build one of the cpp kernel examples. The creation of the xclbin work just fine, but when I try to run the create_vitis_afi.sh script, I get the following error:An error occurred (InvalidAction) when calling the CreateFpgaImage operation: The action CreateFpgaImage is not valid for this web service.I am trying to obtain an awsxclbin file that I can use to program an FPGA on a F1 machine.I have admin role in the account where I tried it. I tried in two regions: Paris and Virginia.Thanks in advance for any hint.Best,MedranoEdited by: medrano on Jun 14, 2020 4:14 AMFollowComment"
The action CreateFpgaImage is not valid for this web service
https://repost.aws/questions/QUSt4Lx0FVS_2r-4w94Ds9QQ/the-action-createfpgaimage-is-not-valid-for-this-web-service
true
"0Accepted AnswerHi Medrano,Paris (eu-west-3) isn't currently supported for F1, so the error is expected in this case. See https://github.com/aws/aws-fpga#gettingstarted for a list of supported GA regions.N. Virginia (us-east-1) is supported for F1, so the error there is not expected. My guess would be that the invocation of the create-fpga-image call in the create_vitis_afi.sh script is submitting its request to an endpoint in one of the AWS regions where F1 is not supported (Paris in this case, perhaps.)Are you passing an -awsprofile flag to your invocation of the create_vitis_afi.sh script? If so, can you confirm that the region associated with that profile is set to us-east-1?If you're not including an -awsprofile flag when you invoke the create_vitis_afi.sh script, then the script will typically choose a region based on the process environment. Can you confirm that the AWS_DEFAULT_REGION environment variable is set to "us-east-1" here?Thanks,EdenCommentShareAWS-User-3026747answered 3 years ago0Thanks EdenThat was it. The configured region. Thanks a lot!Best,MedranoCommentShareanswered 3 years ago"
The import statements in the code cells have no syntax highlighting and the cells stay in a running state; when starting the notebook I get "There was a problem loading the notebook. Please try again later."FollowComment
JupyterLab in the AWS lab keeps running and never returns results
https://repost.aws/questions/QULogyhfrJSO2804UcEi6NLA/aws%E5%AE%9E%E9%AA%8C%E5%AE%A4%E4%B8%AD%E7%9A%84jupyterlab%E4%B8%80%E7%9B%B4%E8%BF%90%E8%A1%8C%EF%BC%8C%E4%B8%8D%E5%87%BA%E7%BB%93%E6%9E%9C
false
"I am facing an issue while loading data from CSV file(s) in S3 -> Aurora MySQL.While the load completes successfully with all rows, the last column (Total Profit) however is loaded as 0.The column is decimal type and I have defined it as below in S3 table mapping:“ColumnType”: “NUMERIC”,“ColumnPrecision”: “10",“ColumnScale”: “2"In the table the column is defined as decimal(10,2)I have other columns in the file of the same type and they load correctly.I have used other methods to load the same file and it loads fine.In the log I can only see one warning however it also doesn't make sense as OrderID is a Integer. In table is it defined as Int(11).2021-06-25T17:14:33 [TARGET_LOAD ]W: Invalid BC timestamp was encountered in column 'OrderID'. The value will be truncated on the target to the timestamp: 874708545 (csv_target.c:173)Below is the S3 transformation:{"TableCount": "1","Tables": [{"TableName": "orders","TablePath": "div/yyyy-mm-dd/","TableOwner": "div","TableColumns": [{"ColumnName": "OrderID","ColumnType": "INT4"},{"ColumnName": "Country","ColumnType": "STRING","ColumnLength": "50"},{"ColumnName": "Item Type","ColumnType": "STRING","ColumnLength": "30"},{"ColumnName": "Sales Channel","ColumnType": "STRING","ColumnLength": "10"},{"ColumnName": "Order Priority","ColumnType": "STRING","ColumnLength": "5"},{"ColumnName": "Order Date","ColumnType": "DATE"},{"ColumnName": "Region","ColumnType": "STRING","ColumnLength": "80"},{"ColumnName": "Ship Date","ColumnType": "DATE"},{"ColumnName": "Units Sold","ColumnType": "INT2"},{"ColumnName": "Unit Price","ColumnType": "NUMERIC","ColumnPrecision": "5","ColumnScale": "2"},{"ColumnName": "Unit Cost","ColumnType": "NUMERIC","ColumnPrecision": "5","ColumnScale": "2"},{"ColumnName": "Total Revenue","ColumnType": "NUMERIC","ColumnPrecision": "10","ColumnScale": "2"},{"ColumnName": "Total Cost","ColumnType": "NUMERIC","ColumnPrecision": "10","ColumnScale": "2"},{"ColumnName": "Total Profit","ColumnType": "NUMERIC","ColumnPrecision": "10","ColumnScale": "2"}],"TableColumnsTotal": "14"}]}Sample data:535113847,Azerbaijan,Snacks,Online,C,2014-10-08,Middle East and North Africa,2014-10-23,934,152.58,97.44,142509.72,91008.96,51500.76874708545,Panama,Cosmetics,Offline,L,2015-02-22,Central America and the Caribbean,2015-02-27,4551,437.2,263.33,1989697.2,1198414.83,791282.37854349935,Sao Tome and Principe,Fruits,Offline,M,2015-12-09,Sub-Saharan Africa,2016-01-18,9986,9.33,6.92,93169.38,69103.12,24066.26892836844,Sao Tome and Principe,Personal Care,Online,M,2014-09-17,Sub-Saharan Africa,2014-10-12,9118,81.73,56.67,745214.14,516717.06,228497.08Table definition:mysql> describe orders;----------------------------------------------------------+| Field | Type | Null | Key | Default | Extra |----------------------------------------------------------+| OrderID | int(11) | NO | | NULL | || Country | varchar(50) | NO | | NULL | || Item Type | varchar(30) | NO | | NULL | || Sales Channel | varchar(10) | NO | | NULL | || Order Priority | varchar(5) | NO | | NULL | || Order Date | date | NO | | NULL | || Region | varchar(80) | NO | | NULL | || Ship Date | date | NO | | NULL | || Units Sold | smallint(6) | NO | | NULL | || Unit Price | decimal(5,2) | NO | | NULL | || Unit Cost | decimal(5,2) | NO | | NULL | || Total Revenue | decimal(10,2) | NO | | NULL | || Total Cost | decimal(10,2) | NO | | NULL | || Total Profit | decimal(10,2) | NO | | NULL | |----------------------------------------------------------+Post Load Output:mysql> select * from orders limit 
3;-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+| OrderID | Country | Item Type | Sales Channel | Order Priority | Order Date | Region | Ship Date | Units Sold | Unit Price | Unit Cost | Total Revenue | Total Cost | Total Profit |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+| 535113847 | Azerbaijan | Snacks | Online | C | 2014-10-08 | Middle East and North Africa | 2014-10-23 | 934 | 152.58 | 97.44 | 142509.72 | 91008.96 | 0.00 || 874708545 | Panama | Cosmetics | Offline | L | 2015-02-22 | Central America and the Caribbean | 2015-02-27 | 4551 | 437.20 | 263.33 | 1989697.20 | 1198414.83 | 0.00 || 854349935 | Sao Tome and Principe | Fruits | Offline | M | 2015-12-09 | Sub-Saharan Africa | 2016-01-18 | 9986 | 9.33 | 6.92 | 93169.38 | 69103.12 | 0.00 |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+3 rows in set (0.00 sec)Kindly suggestEdited by: DivAWS on Jun 27, 2021 11:20 AMFollowComment"
"DMS loads last column as 0, Source CSV file in S3, Target Aurora"
https://repost.aws/questions/QUO-C7auLXQVaU0katXUjnvA/dms-loads-last-column-as-0-source-csv-file-in-s3-target-aurora
false
"0I found the solution to the issue with the help of AWS support.Use /r/n as your Row delimiter.It really depends on where you created your data file. In Windows both a CR (/r) and LF(/n) are required to note the end of a line, whereas in Linux/UNIX a LF (/n) is only required.CommentShareDivAWSanswered 2 years ago"
"Why am I getting the "You Need Permissions" error message when trying to close my AWS account as the Root User? I have removed all member accounts as instructed and the Root User has access to the billing information.The message is as follows:You Need PermissionsYou don't have permission to access billing information for this account. Contact your AWS administrator if you need help. If you are an AWS administrator, you can provide permissions for your users or groups by making sure that (1) this account allows IAM and federated users to access billing information and (2) you have the required IAM permissions.FollowComment"
"Permissions error message displayed when attempting to close my AWS Account, as the Root User"
https://repost.aws/questions/QUJzWirdVpRhidy1mfH78N2g/permissions-error-message-displayed-when-attempting-to-close-my-aws-account-as-the-root-user
false
"0Hello there,The Payer account of your organization may have setup a Security Control policy denying all attempts to close a linked account - even using Root user.Please contact the administrator of your payer account and ask for closing your linked account using the OrganizationAccessRole or to remove the SCP for your account.You can see the details here : https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_close.html#orgs_close_account_policy_examples that prevents the account closure.hope this helps.CommentShareNithin Chandran Ranswered 20 days ago"
My message routing destination is a web service on EKS. After updating the EKS version (which changed the DNS resolution of the message routing destination), message routing is no longer working in AWS IoT Core. Can you help me check?FollowComment
Message routing is not working on AWS IOT core
https://repost.aws/questions/QUN9rFB9t4SEyTCJ7oW3YhJA/message-routing-is-not-working-on-aws-iot-core
false
"This was working fine till yesterday and suddenly my component stopped working when I did a revise deploymentGetting error ModuleNotFoundError: No module named '_awscrt'.On my component artifacts-unarchived location I can see the python packages are availableI see the packages folder awscrt, awsiot, boto3, botocore all packages that needed but when I try importfrom awsiot.greengrasscoreipc.clientv2 import GreengrassCoreIPCClientV2, getting ModuleNotFoundError: No module named '_awscrt'I am using the awsiotsdk==1.9.2, I also tried installing awscrt==0.13.3, awsiot==0.1.3 but no luck still getting the error ModuleNotFoundError: No module named '_awscrt'.Need advise to fix this issue.FollowComment"
ModuleNotFoundError: No module named '_awscrt' - aws greengrass
https://repost.aws/questions/QUVuHt6ajvT8qqfwL36BOkgA/modulenotfounderror-no-module-named-awscrt-aws-greengrass
false
"1HelloI understand that, you are getting error 'ModuleNotFoundError' with an error message 'No module named _awscrt' when you revised a deployment which was working fine before . You can see the python packages are available in your component artifacts-unarchived location in packages folder awscrt, awsiot, boto3, botocore. However, when you try to import from awsiot.greengrasscoreipc.clientv2 import GreengrassCoreIPCClientV2, you are observing the above mentioned error.Can you please perform either of the following to see if the mentioned issue is mitigated:Try installing the package as a 'ggc_user" directly which ensures that whatever was missing from the artifact is now being picked up from the ggc_user site-packages.===sudo -u ggc_user python3 -m pip install awsiotsdkYou can also specify the installation of necessary packages/libraries in the lifecycle section[1] of your component's recipe.===Lifecycle:Install:RequiresPrivilege: trueScript: |python3 -m pip install awsiotsdkPlease look into the below custom component's recipe in the below workshop link for your reference :[+] https://catalog.workshops.aws/greengrass/en-US/chapter5-pubsubipc/10-step1References:[1] AWS IoT Greengrass component recipe reference - https://docs.aws.amazon.com/greengrass/v2/developerguide/component-recipe-reference.htmlCommentShareAws_user171999answered 3 months ago0Thanks the mitigation worked for us, the actual problem is on the CI pipeline where there were few modifications on the component configurations and it started failing. Now its been fixed and workingCommentSharerePost-User-6182476answered 2 months ago"
"Hello,It appears that the rate expression that allows specification of a trigger time as a cron expression is based on the UTC timezone[1].I need to trigger a Lambda function every morning at 9am Paris, France, time, but because of Daylight Saving Time I'm currently forced to update the cron expression manually twice a year. If I could specify a timezone such as "Europe/Paris" I wouldn't have this problem as I guess the cron-equivalent is based on the 'server' timezones and would choose the correct time taking into account DST.As it doesn't seem possible at the moment (but I'm happy to learn in case it is!), I'd like to submit this request to provide a configuration option that allows to specify a timezone. It should support the "Zone name used in value of TZ environment variable" ("America/Vancouver"), but could also allow for UTC offsets ("UTC+3") and similar.Thanks for considering this request,Jakob.[1] https://docs.aws.amazon.com/lambda/latest/dg/tutorial-scheduled-events-schedule-expressions.htmlFollowComment"
Request: Specify timezone for Lambda rate expression
https://repost.aws/questions/QUoo13AvdIRuqq4hpIL8jKjg/request-specify-timezone-for-lambda-rate-expression
false
"0If you’re using the Serverless framework, there is a plug in for that.https://github.com/UnitedIncome/serverless-local-scheduleCommentSharejelderanswered 5 years agoD Laudams 10 months agoThe problem with that approach is it fails every time there is an adjustment to the time zone rules.Share0Thanks for the pointer to the serverless plugin! I am indeed using serverless so that is exactly what will solve my problem. That being said and looking at the way the plugin manages the DST period, it is a clever workaround, but still just a workaround. But it's great it exists! :-)CommentShareJakob Fixanswered 5 years ago"
Here we specify the RDS for PostgreSQL extensions: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts.General.FeatureSupport.Extensions.96xDo we have similar documentation for Aurora for PostgreSQL? Or do we support the same extension list?FollowComment
Aurora for PostgreSQL plugin list
https://repost.aws/questions/QUsbUOD59TTFifU9v0QyygRQ/aurora-for-postgresql-plugin-list
true
0Accepted AnswerFound it. The same list of extensions based on https://aws.amazon.com/rds/aurora/faqs/CommentShareAWS-User-9314474answered 6 years ago
I need to limit the number of connections to the source DB and seem to recall I had set this value in the .cfg file mentioned. I updated, and it looks like I lost that setting.FollowComment
Where are the options supported by AWS Schema Conversion Tool.cfg?
https://repost.aws/questions/QUKoaairImQjC_ZLoTBao7LQ/where-are-the-options-supported-by-aws-schema-conversion-tool-cfg
true
0Accepted AnswerSupport pointed me toward the $home\AWS Schema Conversion Tool\settings.xml file in my user directory. This setting is adjustable in that file.CommentSharerePost-User-5142735answered 7 months ago-1AWS SCT makes a single connection to the source by design.Can you elaborate on how you found multiple connections being made by SCT to the source database for a single project?CommentShareD-Raoanswered 7 months agorePost-User-5142735 7 months agoWhen you launch the tool in debug mode it outputs a list of settings. One of them is the following:"source.jdbc.connections.max=20"I can't figure out the correct syntax in the config file to change that value.Share
"Hello,I'm attempting to use pinpoint to mass send out voice messages to parents of a school district, but I'm finding the quotas confusing. If there a limit of a max of 20 calls per minute, even with a production account (ie: non-sandbox)?This page makes it sound like the 20 calls is a limit of sandbox accounts:https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-voice-sandbox.htmlbut this page make it look like that is a limit on production accounts:https://docs.aws.amazon.com/pinpoint/latest/developerguide/quotas.html#quotas-voice(but it also states you can send 1 call per minute, but only for the first 20 seconds, as there is a max of 20 per minute then right?)To me, 20 calls/minute seems very restrictive for an AWS product doesn't it? Is pinpoint the wrong tool for the job if I want an application to mass send (I was thinking like a few hundred a minute via lambda) automated TTS calls to our students?FollowComment"
Pinpoint Voice Quota Clarification
https://repost.aws/questions/QUJYdr79geRMWHKaRjzXnR_g/pinpoint-voice-quota-clarification
false
"0Hello,I understand that you are currently in the process of using Amazon Pinpoint to create a mass calling system and would like further clarification on Pinpoint's Voice quotas. Referring back to the documentation you linked, it appears that 1) the default limit for a sandbox account is 20 messages per day, 2) whereas for a non-sandbox account, the limit would be 20 messages per minute and is not eligible for increase. If you require hundreds of calls per minute, then you may want to consider using a different AWS service for your TTS calls.Please let me know if this response helps or if you have any questions.CommentShareJessica_Canswered 10 months agoSUPPORT ENGINEERAWS_SamMreviewed 10 months ago"
"I couldn't see one, but something like the CLI aws ec2 describe-vpn-connections command would be handy.I'd like to extract the tunnel outside IP addresses, so that I can set them for a test harness VPC.Thank you,GaryFollowComment"
Is there a method/way to describe EC2 VPN connections using CDK?
https://repost.aws/questions/QUXT4VMhcLSgSg8XYpku3p6w/is-there-a-method-way-to-describe-ec2-vpn-connections-using-cdk
true
"1Accepted AnswerI also checked the methods and could not find anything that could confirm the IP address.It is likely that the CDK will not be able to see it either, as there does not seem to be anything in CloudFormation that outputs an external IP in the return value.CommentShareEXPERTRiku_Kobayashianswered a month agogary a month agoThanks Riku, I can use bash to string things togetherSharegary a month agoHmm, that 1 value is required for a VPN customer gateway file and a security group, which I now need to edit using a mix of AWS CLI and SSH in a bash script. Not so nice looking now. I'll check if it can be added.Share"
Can we connect to an RDS-based SQL database using PuTTY? If possible, please provide the steps or any other way we can connect.ThanksFollowComment
can we connect RDS based SQL Database by putty
https://repost.aws/questions/QUoXCzucEpSAOYoFVUKxZgYw/can-we-connect-rds-based-sql-database-by-putty
true
"0Accepted AnswerRDS is not available to the world by default. It's also generally a bad idea to allow access to the RDS from anywhere except from inside your VPC. I recommend you do the following:Create a security group that allows access to the RDS over port 3306 from your EC2 instanceVisit https://console.aws.amazon.com/ec2/home#s=SecurityGroups and create a new security group.Switch to the inbound tab and choose MYSQL from the dropdown.Erase the 0.0.0.0/0 in the source field then click the input field. It will present you with a list of existing security groups. Choose the one that your EC2 instance belongs to.Click the apply rule changes buttonAssign the security group to your RDSVisit https://console.aws.amazon.com/rds/home#dbinstances: and select your RDS instance and under the Instance Actions menu select ModifyChange the RDS security group to the one you just createdMake sure to select the Apply immediately option at the bottom of this pageClick Continue and apply the new changes. (the change can sometimes take a couple of minutes)SSH into your EC2 instance then run the mysql command in your questionCommentSharerePost-VivekPophaleanswered 9 months agorePost-User-3274391 9 months agoThat's really helpful Thank youShare0Excellent thank you.But if i don't want to manage the database let it do by AWS means I want to rely with RDS service. Please let me know how to do database related operations like creation, modification, insertion in tables.CommentSharerePost-User-3274391answered 9 months agoIndranil Banerjee AWS EXPERT9 months agoTypically you will be doing database adminstration commands like database table creation, indexes creation, tables modification etc. using some tools that will vary depending on the database flavor (MySQL, Postgres, MSSQL Server, Oracle etc.). You should be able to use whatever tool you are currently using. Just need to install that tool on a machine that can talk to your RDS instance.For inserting data, updating data or running queries on the database, typically customers use applications that connect to the database using drivers like JDBC, ODBC etc. You can continue doing what you do nowShareIndranil Banerjee AWS EXPERT9 months agoRefer to these -https://aws.amazon.com/premiumsupport/knowledge-center/rds-common-dba-tasks/https://aws.amazon.com/blogs/database/common-administrator-responsibilities-on-amazon-rds-and-aurora-for-postgresql-databases/https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Tutorials.html - in particular look at the section Tutorials and sample code in GitHubShareIndranil Banerjee AWS EXPERT9 months agoIn response to your comment - "But if i don't want to manage the database let it do by AWS means I want to rely with RDS service"It is important to note what AWS will take care of when you use the RDS service. The activities that AWS will take care of on your behalf are as mentioned in the "What does Amazon RDS manage on my behalf?" section of the RDS FAQs - https://aws.amazon.com/rds/faqs/You will still be responsible for creating tables, indexes, stored procedures and other such database objects as well as responsible for inserts/updates/queries of data on the database tables.Share0If your RDS database is public, then you should be able to connect to it using whatever database tool you use to connect to your on-prem databases, such as pgAdmin for postgres or mySQL workbench for mySQL etc.From a security point of view, it is not recommended to keep your RDS databases public. 
You should create your RDS database inside a private subnet inside a VPC that you create. The security group of the RDS database should only allow inbound access on the port that your database server listens on, only from another public subnet in the VPC.You can then create an EC2 machine in the public subnet of your VPC that will act as a bastion host. You can install the database tools on this EC2 machine.You can use putty to ssh into the bastion host as shown here - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.htmlOnce you have been able to ssh into the bastion host, you can use the database tool like pgAdmin to connect to the RDS database. For more details, refer to this AWS Support article - https://aws.amazon.com/premiumsupport/knowledge-center/rds-connect-ec2-bastion-host/CommentShareEXPERTIndranil Banerjee AWSanswered 9 months agorePost-User-3274391 9 months agoExcellent thank you.But if i don't want to manage the database let it do by AWS means I want to rely with RDS service. Please let me know how to do database related operations like creation, modification, insertion in tables.Share"
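To round out the bastion-host approach above, here is a hypothetical Python sketch using PyMySQL; it assumes an SSH tunnel has already been opened with PuTTY or ssh (local port 3306 forwarded to the RDS endpoint), and the hostnames, credentials, and database name are placeholders.

```python
# Connect to a MySQL-compatible RDS instance through a local SSH tunnel,
# e.g. "ssh -i key.pem -L 3306:<rds-endpoint>:3306 ec2-user@<bastion>".
# Host, user, password, and database below are placeholders.
import pymysql

conn = pymysql.connect(
    host="127.0.0.1",   # tunnel entry point; use the RDS endpoint if running on the bastion
    port=3306,
    user="admin",
    password="example-password",
    database="mydb",
    connect_timeout=10,
)

with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print(cur.fetchone())

conn.close()
```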
"Hi,Can we create custom connector in Appflow to interact with the lambda function(destination), keeping Salesforce as source? I couldn't find any resource to create custom connector to interact with AWS Lambda Function.Here's my usecase:I want to poll the data from salesforce db and process the data using lambda function. I want to use Appflow to connect salesforce and AWS lambda.If you know better approach, please suggest.Thanks.FollowComment"
Appflow custom connector to connect to AWS Lambda Function
https://repost.aws/questions/QUixVDZ-r6TvWne0bfFjdpUA/appflow-custom-connector-to-connect-to-aws-lambda-function
false
"0Hi - Thanks for checking. Have checked this blog Building custom connectors using the Amazon AppFlow Custom Connector SDK ? It has the following high level stepsCreate a custom connector as an AWS Lambda function using the Amazon AppFlow Custom Connector SDK.Deploy the custom connector Lambda function, which provides the serverless compute for the connector.Lambda function integrates with a SaaS application or private API.Register the custom connector with Amazon AppFlow.Users can now use this custom connector in the Amazon AppFlow service.CommentShareEXPERTAWS-User-Nitinanswered 2 months ago"
"Hi,I have a design which split into two modules and I need to load each module into seperage FPGAs. (in this case 2 FPGA's).Without AWS, I would have used two FPGA boards each loaded with a respective image and both are connected/routed through the FMC connecter.How to achieve the same using AWS F1.4xlarge (which has two FPGA's in one group)?Regards,VenkatFollowComment"
How to implement a design which spans across two FPGAs?
https://repost.aws/questions/QU76Az23LPRguiEB8QVdSDKg/how-to-implement-a-design-which-spans-across-two-fpgas
false
"0Dear customerThank you for reaching out to aws. In this case, you need to partition the design into the CL region of two FPGAs, very similar to what you would have done in a non-F1 case, the difference being with F1, the shell part of the design that involves PCIe interface, basic DDR-C etc are by default generated for you as part of the shell. The P2P communication between the two FPGAs is through PCIe. Please refer to https://github.com/awslabs/aws-fpga-app-notes/tree/master/Using-PCIe-Peer2Peer for further guidance on using PCIe P2P.ThanksCommentShareAWS-User-7905434answered 2 years ago0Hi Kishoreataws,Thanks for the response.As you mentioned the communication needs to happen through PCIe, does that mean there is not Wire connection between the two FPGAs?In other words, is it possible a continuous stream of data (signals) between the FPGAs?Regards,VenkatCommentSharevenkubanswered 2 years ago0Dear customerThanks for the follow up. Thats correct, PCIe path is the only link available for communication between the FPGAs. There is no direct pin connections between the FPGAsThanksCommentShareAWS-User-7905434answered 2 years ago0thanks for the response.CommentSharevenkubanswered 2 years ago0Hi Kishore,I looked at the APP Note -Using-PCIe-Peer2Peer and understand how to communicate between two FPGAs.Is there a verilog/system verilog testbench env with a testcase or so, so that I can try this before building FPGA image.?Please point to any customer provided/created or AWS created testbench env, if any available.I guess this would help for me to make sure my client logic also of error free.Appreciate your help.Regards,VenkatCommentSharevenkubanswered 2 years ago0Dear customerThank you for the follow up. For the peer 2 peer communication between the FPGAs use case you are interested, what is the data rates that you are targeting?Regarding the test bench, the addresses can just be mapped to host memory and see if that works and other FPGA access should also work because all it is doing is accessing some external address. Peer-to-Peer transfer is essentially accessing destination FPGA's PF0-BAR4 address space. Therefore you'll need a test to verify if CL in source FPGA is capable of generating those addresses. For simulations, you may define an address space in host memory and verify if CL can access that space. The Shell BFM provided in aws-fpga TB supports this. Please feel free to contact aws if you have any follow up questions.ThanksCommentShareAWS-User-7905434answered 2 years ago"
"A static website hosted in S3, served via CloudFront. Now, the website URL of dev environment is accessible over the internet by anyone, which seem to be a security risk. For that, am planning to enable Users authentication with Okta/ Cognito in the next phase.In the meantime, have tried some workarounds like (1) restricting the application access with IP address/range, which is impossible because our users are accessing from AWS Workspace (dynamic IP range), (2) restricting with IAM user/role, which is also impossible because we do not have privileges to manage the IAM.Apart from above, what are the possible alternatives to protect the application from anonymous access?Also, I am not sure whether it is a severe application security issue. By any chance, leaving the website open to public access prone to Cross-Site Scripting (XSS) attacks or any other security threats?FollowComment"
Secure Static Website From Public Exposure
https://repost.aws/questions/QUIWvVZQSLQ3mzXiOsEmcotw/secure-static-website-from-public-exposure
false
"0Hi cloudarch,You could look for these options:Enable WAF on CloudFront. At least it will prevent certain malicious XSS script attack. You can leverage default manage rules, block countries and more: https://www.wellarchitectedlabs.com/security/200_labs/200_cloudfront_with_waf_protection/A quick temporary win can be to leverage CloudFront functions and or Lambda@Edge to perform some lightweight authentication such as Basic Auth, where you share “beta” credentials to your users and check those. This is an example: https://gist.github.com/lmakarov/e5984ec16a76548ff2b278c06027f1a4.hope above helps youCommentShareEXPERTalatechanswered 4 months ago0If you had to restrict based on IP address (not something I'd normally recommend; but in this case it's probably suitable):Normally Workspaces instance access the internet via a NAT Gateway in the VPC that the instances are running. That NAT Gateway has a static IP address so it would be reasonably easy to work with that.CommentShareEXPERTBrettski-AWSanswered 4 months ago"
"Hello,i'm on this issue since yesterday without any means to understand what's going wrong.I install awscli via pip, setup my credentials, and now i'm able to retrieve my the whole list of logs hosted on Cloudwatch: { "arn": "arn:aws:logs:eu-west-1:077720739816:log-group:testloginstance:*", "creationTime": 1563357673443, "metricFilterCount": 0, "logGroupName": "testloginstance", "storedBytes": 129594 },But when i try to display logs, i always have the same error:An error occurred (InvalidParameterException) when calling the GetLogEvents operation: 1 validation error detected: Value 'log-group:testloginstance' at 'logStreamName' failed to satisfy constraint: Member must satisfy regular expression patternHere my command and debug output:aws logs --debug get-log-events --log-group-name testloginstance --log-stream-name log-group:testloginstance2019-08-21 12:14:16,298 - MainThread - awscli.clidriver - DEBUG - CLI version: aws-cli/1.16.221 Python/2.7.16 Linux/3.10.0-514.2.2.el7.x86_64 botocore/1.12.2122019-08-21 12:14:16,298 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['logs', '--debug', 'get-log-events', '--log-group-name', 'testloginstance', '--log-stream-name', 'log-group:testloginstance']2019-08-21 12:14:16,299 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function add_scalar_parsers at 0x7f5ee91bd050>2019-08-21 12:14:16,299 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function register_uri_param_handler at 0x7f5ee96b95f0>2019-08-21 12:14:16,299 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function inject_assume_role_provider_cache at 0x7f5ee9692668>2019-08-21 12:14:16,300 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function attach_history_handler at 0x7f5eeda0c398>2019-08-21 12:14:16,301 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /opt/rh/python27/root/usr/lib/python2.7/site-packages/botocore/data/logs/2014-03-28/service-2.json2019-08-21 12:14:16,323 - MainThread - botocore.hooks - DEBUG - Changing event name from building-command-table.logs to building-command-table.cloudwatch-logs2019-08-21 12:14:16,323 - MainThread - botocore.hooks - DEBUG - Event building-command-table.cloudwatch-logs: calling handler <function add_waiters at 0x7f5ee91c5410>2019-08-21 12:14:16,332 - MainThread - awscli.clidriver - DEBUG - OrderedDict([(u'log-group-name', <awscli.arguments.CLIArgument object at 0x7f5ee8f2d550>), (u'log-stream-name', <awscli.arguments.CLIArgument object at 0x7f5ee8f2d5d0>), (u'start-time', <awscli.arguments.CLIArgument object at 0x7f5ee8f2d250>), (u'end-time', <awscli.arguments.CLIArgument object at 0x7f5ee8f2d650>), (u'next-token', <awscli.arguments.CLIArgument object at 0x7f5ee8f2d690>), (u'limit', <awscli.arguments.CLIArgument object at 0x7f5ee8f2d6d0>), (u'start-from-head', <awscli.arguments.BooleanArgument object at 0x7f5ee8f2d710>), (u'no-start-from-head', <awscli.arguments.BooleanArgument object at 0x7f5ee8f2d7d0>)])2019-08-21 12:14:16,332 - MainThread - botocore.hooks - DEBUG - Changing event name from building-argument-table.logs.get-log-events to building-argument-table.cloudwatch-logs.get-log-events2019-08-21 12:14:16,333 - MainThread - botocore.hooks - DEBUG - Event building-argument-table.cloudwatch-logs.get-log-events: calling handler <function add_streaming_output_arg at 0x7f5ee91bd500>2019-08-21 12:14:16,333 - MainThread - botocore.hooks - DEBUG - Event 
building-argument-table.cloudwatch-logs.get-log-events: calling handler <function add_cli_input_json at 0x7f5ee969b398>2019-08-21 12:14:16,333 - MainThread - botocore.hooks - DEBUG - Event building-argument-table.cloudwatch-logs.get-log-events: calling handler <function unify_paging_params at 0x7f5ee923dde8>2019-08-21 12:14:16,340 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /opt/rh/python27/root/usr/lib/python2.7/site-packages/botocore/data/logs/2014-03-28/paginators-1.json2019-08-21 12:14:16,341 - MainThread - botocore.hooks - DEBUG - Event building-argument-table.cloudwatch-logs.get-log-events: calling handler <function add_generate_skeleton at 0x7f5ee9232758>2019-08-21 12:14:16,341 - MainThread - botocore.hooks - DEBUG - Changing event name from before-building-argument-table-parser.logs.get-log-events to before-building-argument-table-parser.cloudwatch-logs.get-log-events2019-08-21 12:14:16,342 - MainThread - botocore.hooks - DEBUG - Event before-building-argument-table-parser.cloudwatch-logs.get-log-events: calling handler <bound method CliInputJSONArgument.override_required_args of <awscli.customizations.cliinputjson.CliInputJSONArgument object at 0x7f5ee8f2d350>>2019-08-21 12:14:16,342 - MainThread - botocore.hooks - DEBUG - Event before-building-argument-table-parser.cloudwatch-logs.get-log-events: calling handler <bound method GenerateCliSkeletonArgument.override_required_args of <awscli.customizations.generatecliskeleton.GenerateCliSkeletonArgument object at 0x7f5ee8f2da90>>2019-08-21 12:14:16,343 - MainThread - botocore.hooks - DEBUG - Changing event name from operation-args-parsed.logs.get-log-events to operation-args-parsed.cloudwatch-logs.get-log-events2019-08-21 12:14:16,343 - MainThread - botocore.hooks - DEBUG - Changing event name from load-cli-arg.logs.get-log-events.log-group-name to load-cli-arg.cloudwatch-logs.get-log-events.log-group-name2019-08-21 12:14:16,344 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.cloudwatch-logs.get-log-events.log-group-name: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7f5ee90d3050>2019-08-21 12:14:16,344 - MainThread - botocore.hooks - DEBUG - Changing event name from process-cli-arg.logs.get-log-events to process-cli-arg.cloudwatch-logs.get-log-events2019-08-21 12:14:16,344 - MainThread - botocore.hooks - DEBUG - Event process-cli-arg.cloudwatch-logs.get-log-events: calling handler <awscli.argprocess.ParamShorthandParser object at 0x7f5ee966f090>2019-08-21 12:14:16,344 - MainThread - awscli.arguments - DEBUG - Unpacked value of u'testloginstance' for parameter "log_group_name": u'testloginstance'2019-08-21 12:14:16,344 - MainThread - botocore.hooks - DEBUG - Changing event name from load-cli-arg.logs.get-log-events.log-stream-name to load-cli-arg.cloudwatch-logs.get-log-events.log-stream-name2019-08-21 12:14:16,344 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.cloudwatch-logs.get-log-events.log-stream-name: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7f5ee90d3050>2019-08-21 12:14:16,344 - MainThread - botocore.hooks - DEBUG - Changing event name from process-cli-arg.logs.get-log-events to process-cli-arg.cloudwatch-logs.get-log-events2019-08-21 12:14:16,344 - MainThread - botocore.hooks - DEBUG - Event process-cli-arg.cloudwatch-logs.get-log-events: calling handler <awscli.argprocess.ParamShorthandParser object at 0x7f5ee966f090>2019-08-21 12:14:16,345 - MainThread - awscli.arguments - DEBUG - Unpacked value of u'log-group:testloginstance' for parameter 
"log_stream_name": u'log-group:testloginstance'2019-08-21 12:14:16,345 - MainThread - botocore.hooks - DEBUG - Changing event name from load-cli-arg.logs.get-log-events.start-time to load-cli-arg.cloudwatch-logs.get-log-events.start-time2019-08-21 12:14:16,345 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.cloudwatch-logs.get-log-events.start-time: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7f5ee90d3050>2019-08-21 12:14:16,345 - MainThread - botocore.hooks - DEBUG - Changing event name from load-cli-arg.logs.get-log-events.end-time to load-cli-arg.cloudwatch-logs.get-log-events.end-time2019-08-21 12:14:16,345 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.cloudwatch-logs.get-log-events.end-time: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7f5ee90d3050>2019-08-21 12:14:16,345 - MainThread - botocore.hooks - DEBUG - Changing event name from load-cli-arg.logs.get-log-events.next-token to load-cli-arg.cloudwatch-logs.get-log-events.next-token2019-08-21 12:14:16,345 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.cloudwatch-logs.get-log-events.next-token: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7f5ee90d3050>2019-08-21 12:14:16,346 - MainThread - botocore.hooks - DEBUG - Changing event name from load-cli-arg.logs.get-log-events.limit to load-cli-arg.cloudwatch-logs.get-log-events.limit2019-08-21 12:14:16,346 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.cloudwatch-logs.get-log-events.limit: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7f5ee90d3050>2019-08-21 12:14:16,346 - MainThread - botocore.hooks - DEBUG - Changing event name from load-cli-arg.logs.get-log-events.start-from-head to load-cli-arg.cloudwatch-logs.get-log-events.start-from-head2019-08-21 12:14:16,346 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.cloudwatch-logs.get-log-events.start-from-head: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7f5ee90d3050>2019-08-21 12:14:16,346 - MainThread - botocore.hooks - DEBUG - Changing event name from load-cli-arg.logs.get-log-events.cli-input-json to load-cli-arg.cloudwatch-logs.get-log-events.cli-input-json2019-08-21 12:14:16,346 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.cloudwatch-logs.get-log-events.cli-input-json: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7f5ee90d3050>2019-08-21 12:14:16,346 - MainThread - botocore.hooks - DEBUG - Changing event name from load-cli-arg.logs.get-log-events.generate-cli-skeleton to load-cli-arg.cloudwatch-logs.get-log-events.generate-cli-skeleton2019-08-21 12:14:16,346 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.cloudwatch-logs.get-log-events.generate-cli-skeleton: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7f5ee90d3050>2019-08-21 12:14:16,347 - MainThread - botocore.hooks - DEBUG - Changing event name from calling-command.logs.get-log-events to calling-command.cloudwatch-logs.get-log-events2019-08-21 12:14:16,347 - MainThread - botocore.hooks - DEBUG - Event calling-command.cloudwatch-logs.get-log-events: calling handler <bound method CliInputJSONArgument.add_to_call_parameters of <awscli.customizations.cliinputjson.CliInputJSONArgument object at 0x7f5ee8f2d350>>2019-08-21 12:14:16,347 - MainThread - botocore.hooks - DEBUG - Event calling-command.cloudwatch-logs.get-log-events: calling handler <bound method GenerateCliSkeletonArgument.generate_json_skeleton of 
<awscli.customizations.generatecliskeleton.GenerateCliSkeletonArgument object at 0x7f5ee8f2da90>>2019-08-21 12:14:16,347 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: env2019-08-21 12:14:16,347 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: assume-role2019-08-21 12:14:16,347 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: assume-role-with-web-identity2019-08-21 12:14:16,347 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: shared-credentials-file2019-08-21 12:14:16,348 - MainThread - botocore.credentials - INFO - Found credentials in shared credentials file: ~/.aws/credentials2019-08-21 12:14:16,348 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /opt/rh/python27/root/usr/lib/python2.7/site-packages/botocore/data/endpoints.json2019-08-21 12:14:16,425 - MainThread - botocore.hooks - DEBUG - Event choose-service-name: calling handler <function handle_service_name_alias at 0x7f5eea84be60>2019-08-21 12:14:16,428 - MainThread - botocore.hooks - DEBUG - Event creating-client-class.cloudwatch-logs: calling handler <function add_generate_presigned_url at 0x7f5eea89d848>2019-08-21 12:14:16,428 - MainThread - botocore.args - DEBUG - The s3 config key is not a dictionary type, ignoring its value of: None2019-08-21 12:14:16,433 - MainThread - botocore.endpoint - DEBUG - Setting logs timeout as (60, 60)2019-08-21 12:14:16,434 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /opt/rh/python27/root/usr/lib/python2.7/site-packages/botocore/data/_retry.json2019-08-21 12:14:16,438 - MainThread - botocore.client - DEBUG - Registering retry handlers for service: logs2019-08-21 12:14:16,439 - MainThread - botocore.hooks - DEBUG - Event before-parameter-build.cloudwatch-logs.GetLogEvents: calling handler <function generate_idempotent_uuid at 0x7f5eea84c140>2019-08-21 12:14:16,440 - MainThread - botocore.hooks - DEBUG - Event before-call.cloudwatch-logs.GetLogEvents: calling handler <function inject_api_version_header_if_needed at 0x7f5eea84d7d0>2019-08-21 12:14:16,440 - MainThread - botocore.endpoint - DEBUG - Making request for OperationModel(name=GetLogEvents) with params: {'body': '{"logStreamName": "log-group:testloginstance", "logGroupName": "testloginstance"}', 'url': u'https://logs.eu-west-1.amazonaws.com/', 'headers': {'User-Agent': 'aws-cli/1.16.221 Python/2.7.16 Linux/3.10.0-514.2.2.el7.x86_64 botocore/1.12.212', 'Content-Type': u'application/x-amz-json-1.1', 'X-Amz-Target': u'Logs_20140328.GetLogEvents'}, 'context': {'auth_type': None, 'client_region': 'eu-west-1', 'has_streaming_input': False, 'client_config': <botocore.config.Config object at 0x7f5ee8bc0950>}, 'query_string': '', 'url_path': '/', 'method': u'POST'}2019-08-21 12:14:16,440 - MainThread - botocore.hooks - DEBUG - Event request-created.cloudwatch-logs.GetLogEvents: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7f5ee8bc0910>>2019-08-21 12:14:16,441 - MainThread - botocore.hooks - DEBUG - Event choose-signer.cloudwatch-logs.GetLogEvents: calling handler <function set_operation_specific_signer at 0x7f5eea84c050>2019-08-21 12:14:16,441 - MainThread - botocore.auth - DEBUG - Calculating signature using v4 auth.2019-08-21 12:14:16,442 - MainThread - botocore.auth - DEBUG - 
CanonicalRequest:POST/content-type:application/x-amz-json-1.1host:logs.eu-west-1.amazonaws.comx-amz-date:20190821T121416Zx-amz-target:Logs_20140328.GetLogEventscontent-type;host;x-amz-date;x-amz-target5805b397761c476c536330c368cacca09997d48cfbce5db67ed1f8aa234d614a2019-08-21 12:14:16,442 - MainThread - botocore.auth - DEBUG - StringToSign:AWS4-HMAC-SHA25620190821T121416Z20190821/eu-west-1/logs/aws4_request1483a200291d08b0125fc0daebd2b48d6f722339577c0603f7ced0b590d678632019-08-21 12:14:16,442 - MainThread - botocore.auth - DEBUG - Signature:a36784e173f011cb88c5370239f99a99998af6974b6736897590501a7625dd892019-08-21 12:14:16,442 - MainThread - botocore.endpoint - DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=POST, url=https://logs.eu-west-1.amazonaws.com/, headers={'Content-Length': '81', 'X-Amz-Target': 'Logs_20140328.GetLogEvents', 'X-Amz-Date': '20190821T121416Z', 'User-Agent': 'aws-cli/1.16.221 Python/2.7.16 Linux/3.10.0-514.2.2.el7.x86_64 botocore/1.12.212', 'Content-Type': 'application/x-amz-json-1.1', 'Authorization': 'AWS4-HMAC-SHA256 Credential=XXX/20190821/eu-west-1/logs/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-amz-target, Signature=a36784e173f011cb88c5370239f99a99998af6974b6736897590501a7625dd89'}>2019-08-21 12:14:16,444 - MainThread - urllib3.util.retry - DEBUG - Converted retries value: False -> Retry(total=False, connect=None, read=None, redirect=0, status=None)2019-08-21 12:14:16,444 - MainThread - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): logs.eu-west-1.amazonaws.com:4432019-08-21 12:14:16,927 - MainThread - urllib3.connectionpool - DEBUG - https://logs.eu-west-1.amazonaws.com:443 "POST / HTTP/1.1" 400 2172019-08-21 12:14:16,928 - MainThread - botocore.parsers - DEBUG - Response headers: {'x-amzn-RequestId': '96e94edc-f5db-4320-9f82-f92d11b8800e', 'Date': 'Wed, 21 Aug 2019 12:14:16 GMT', 'Content-Length': '217', 'Content-Type': 'application/x-amz-json-1.1', 'Connection': 'close'}2019-08-21 12:14:16,928 - MainThread - botocore.parsers - DEBUG - Response body:{"__type":"InvalidParameterException","message":"1 validation error detected: Value 'log-group:testloginstance' at 'logStreamName' failed to satisfy constraint: Member must satisfy regular expression pattern: [^:*]*"}2019-08-21 12:14:16,929 - MainThread - botocore.hooks - DEBUG - Event needs-retry.cloudwatch-logs.GetLogEvents: calling handler <botocore.retryhandler.RetryHandler object at 0x7f5ee8b42a10>2019-08-21 12:14:16,929 - MainThread - botocore.retryhandler - DEBUG - No retry needed.2019-08-21 12:14:16,931 - MainThread - awscli.clidriver - DEBUG - Exception caught in main()Traceback (most recent call last): File "/opt/rh/python27/root/usr/lib/python2.7/site-packages/awscli/clidriver.py", line 217, in main return command_table[parsed_args.command](remaining, parsed_args) File "/opt/rh/python27/root/usr/lib/python2.7/site-packages/awscli/clidriver.py", line 358, in __call__ return command_table[parsed_args.operation](remaining, parsed_globals) File "/opt/rh/python27/root/usr/lib/python2.7/site-packages/awscli/clidriver.py", line 530, in __call__ call_parameters, parsed_globals) File "/opt/rh/python27/root/usr/lib/python2.7/site-packages/awscli/clidriver.py", line 650, in invoke client, operation_name, parameters, parsed_globals) File "/opt/rh/python27/root/usr/lib/python2.7/site-packages/awscli/clidriver.py", line 662, in _make_client_call **parameters) File "/opt/rh/python27/root/usr/lib/python2.7/site-packages/botocore/client.py", line 357, in 
_api_call return self._make_api_call(operation_name, kwargs) File "/opt/rh/python27/root/usr/lib/python2.7/site-packages/botocore/client.py", line 661, in _make_api_call raise error_class(parsed_response, operation_name)InvalidParameterException: An error occurred (InvalidParameterException) when calling the GetLogEvents operation: 1 validation error detected: Value 'log-group:testloginstance' at 'logStreamName' failed to satisfy constraint: Member must satisfy regular expression pattern: [^:*]*2019-08-21 12:14:16,932 - MainThread - awscli.clidriver - DEBUG - Exiting with rc 255An error occurred (InvalidParameterException) when calling the GetLogEvents operation: 1 validation error detected: Value 'log-group:testloginstance' at 'logStreamName' failed to satisfy constraint: Member must satisfy regular expression pattern: [^:*]*I especially try to take a simple log ressource name, without special characters, trying several syntax, same result or API response that this log doesn't exist.Any help would be greatly appreciated.ThxEdited by: nicnictout on Aug 21, 2019 5:50 AMFollowComment"
AWS CLI error: Member must satisfy regular expression pattern
https://repost.aws/questions/QUVIebbI-tSYWTpIjh0OG4-A/aws-cli-error-member-must-satisfy-regular-expression-pattern
false
"0Hi,Change your invocationFrom:aws logs get-log-events --log-group-name testloginstance --log-stream-name log-group:testloginstanceTo:aws logs get-log-events --log-group-name testloginstance --log-stream-name testloginstanceFYI: I was able to reproduce the same error.C:\Users\randy\aws\sam\projects\nodejs>aws logs get-log-events --log-group-name CloudWatchLogGroup --log-stream-name log-group::MyLogStreamAn error occurred (InvalidParameterException) when calling the GetLogEvents operation: 1 validation error detected: Value 'log-group::MyLogStream' at 'logStreamName' failed to satisfy constraint: Member must satisfy regular expression pattern: [^:*]*And after I fixed the --log-stream-nameC:\Users\randy\aws\sam\projects\nodejs>aws logs get-log-events --log-group-name CloudWatchLogGroup --log-stream-name MyLogStream{ "events": [ { "timestamp": 1566284476276, "message": "[System] [Information] [7036] [Service Control Manager] [EC2AMAZ-7JBQ892] [The Microsoft Account Sign-in Assistant service entered the stopped state.]", "ingestionTime": 1566284525380 },Hope this helps!-randyCommentShareRandyTakeshitaanswered 4 years ago"
"Can I use AWS Device Farm to test applications on PCs i.e. Windows PC across Internet explorer, Chrome Browser, Edge Etc? Can I use this on Apple PCs to test on Safari ? What other platforms / browsers are supported apart from the mobile platforms listed in the documentations.is the product SOC compliant ? if not are there any mitigating controls to put in place for using this.FollowComment"
AWS Device Farm Capabilities
https://repost.aws/questions/QUEGCVget1RFWQY3wrF8JDxg/aws-device-farm-capabilities
true
"2Accepted AnswerDevice Farm supports running tests on browsers through Device Farm desktop browser testing. It supports Chrome, Firefox, IE, and MSFT Edge. You can refer hereIt does not support Safari nor MacOS.To date, the service is not in scope for SOC report available in AWS Artifact. You can find all the services in scope for AWS SOC report through Artifact in the AWS Management Console, or download the Fall 2021 SOC 3 report here.One way to mitigate is to run Selenium based testing on your own device farms based on services in scope for AWS SOC report, such as EC2.CommentShareJason_Sanswered a year ago"