Start Time,Last Updated,Originator,Unique Users,Unique Users Count,Summary,Message Count,Positive Topics,Negative Topics,Overall Sentiment 2023-08-18T05:14:01.807399,2023-08-18T09:00:33.134509,aganivada,"{'aganivada', 'mnirmal'}",2,"- The branch ""hotfix/0.0.1.1"" was not initially marked as protected. - Mnirmal requested Aganivada to mark the branch as protected. - Aganivada confirmed that the branch has been marked as protected. - Mnirmal encountered an error while trying to hotfix-finish on the metrics manager. - The error message indicated that Mnirmal was not allowed to push code to protected branches on the project. - Aganivada checked the issue and updated the main branch to allow developers to merge to it. - Mnirmal acknowledged the update and agreed to try again.",10,"['branch marked as protected', 'main branch updated']","['error while hotfix-finish', 'not allowed to push to protected branches']",POSITIVE,[] 2023-03-28T22:48:07.069559,2023-03-28T23:18:51.059979,ppant,"{'aganivada', 'ppant'}",2,"- The user ""ppant"" is asking if the ""terminal-core-stack"" has been removed from the ""base-infra"" core. - They cannot find any reference to it in the specified GitLab repository. - However, it is being imported in the ""cdk_app.py"" file, causing a local synthesis failure for core stacks. - The user ""aganivada"" suggests copying the file locally from a specific commit to test it. - ""ppant"" asks if they should commit this file to the develop branch along with changes for ""sqs lambda"". - ""aganivada"" explains that the file should automatically come to the develop branch when the release/0.9.9 branch is closed over the weekend. - They suggest waiting for the release process to bring this change to the develop branch.",4,"['terminal-core-stack', 'base-infra', 'cdk_app.py', 'sqs lambda']","['local synthesis failure', 'develop branch', 'release/0.9.9 branch']",,[] 2023-06-20T00:17:05.498079,2023-06-20T09:55:08.310219,pjha,"{'aganivada', 'pjha', 'rtaraniganty'}",3,"- The inbound rule was removed from the 'bastion-sg-int' security group. - To access the INT bastion, users need to connect to the VPN. - Bastion host access is only enabled for the p81 SG, and the source IP needs to be from the VPN. - There is a page that describes the process of connecting to the Perimeter 81 VPN, which was added by ppant.",7,"['bastion-sg-int', 'VPN', 'Perimeter 81 VPN']",[],,[] 2023-01-10T13:28:07.015369,2023-01-10T13:55:49.822239,rtaraniganty,"{'svummidi', 'rtaraniganty'}",2,"- The team is discussing the possibility of summarizing all the responses for a given open-text question in future releases. - Anair has expressed concerns about data privacy related to OpenAI. - The team is currently working on trying G2 comments summarization and testing it for scale. - They are considering using their own platform and releasing retro comments. - They want to confirm their ability to run the model in-house, even if they don't use it initially.",3,"['possibility of summarizing responses', 'working on G2 comments summarization', 'considering using own platform', 'confirming ability to run model in-house']",['concerns about data privacy'],UNKNOWN,[] 2023-06-18T22:20:24.202269,2023-06-19T08:24:30.751559,aganivada,"{'aganivada', 'rtaraniganty'}",2,"- There is an issue with the deployment of a DynamoDB table due to a change in the ID of an SSM parameter. - Some stacks have the old CDK key for the parameter, while others have the new key suffixed. - Deploying an old stack that already has the SSM parameter stored results in an error because the key already exists. - The suggestion of deleting the SSM parameter and redeploying is considered, but it may temporarily make the parameter unavailable. - Redeploying the service stacks is not necessary as the same SSM parameter will be added in the same path during the next deployment. - The reason for the change in the ID of the SSM parameter is not remembered, but it is seen as a good learning experience for future reviews.",9,"['deployment issue', 'DynamoDB table', 'SSM parameter', 'CDK key', 'learning experience']","['error', 'parameter unavailable']",NEUTRAL,[] 2023-03-30T23:32:53.213369,2023-04-05T04:33:36.005759,pjha,"{'aganivada', 'pjha'}",2,- The person named pjha is trying to deploy a tenancy with the route53 target as an alb endpoint. - They are encountering an error stating that the service already exists. - There is a concern about downtime if they choose to destroy and deploy the service. - They discuss the possibility of trying experiments or using another AWS account. - pjha tries modifying the service name but it doesn't work. - The only solution they can think of is to destroy and deploy the service again. - pjha confirms that deploying after destroying works.,16,"['deploying tenancy', 'route53 target', 'alb endpoint', 'trying experiments', 'using another AWS account', 'destroy and deploy again']","['error', 'service already exists', 'downtime', 'modifying service name']",,[] 2023-05-17T10:27:36.651589,2023-05-17T22:09:54.636269,rtaraniganty,"{'aganivada', 'pjha', 'askumar', 'rtaraniganty'}",4,"- The team needs to make upgrades to Opensearch ES from version 1.2 to 1.3 and then to version 2.x. - The team also plans to migrate from RDS to Aurora and use Aurora serverless. - The Aurora instance should be created with the same version as the one in production. - After creating the Aurora instance, it will be migrated to the latest version as a trial before attempting it in stage and production. - Shell scripts will be used for ES re-indexing after applying new mappings. - The Aurora version in the int environment is the same as the one in production (13.8).",4,"['Upgrading Opensearch ES', 'Migrating from RDS to Aurora', 'Using Aurora serverless', 'Creating Aurora instance with same version as production', 'Migrating Aurora instance to latest version as trial', 'Using shell scripts for ES re-indexing', 'Aurora version in int environment same as production']",[],,[] 2023-07-12T22:37:45.101559,2023-07-13T04:32:21.711659,aganivada,"{'aganivada', 'bganganna', 'araman'}",3,"- There is a request to update the default invitation expiry from 7 days to 30 days. - The change has been made and tested in the int environment. - The build has been deployed to the stage environment for validation. - Testing is required to verify that the TTL is updated to 30 days. - Collaborator invite flow is the most frequent flow to test, but customer inviting vendor/vendor inviting customer flows are also important. - The build has been deployed to the stage environment. - Testing has been done for customer and vendor collaborator flow, as well as customer inviting an existing vendor. - The change only affects the invitation flow, not the onboarding flow. - Testing the expiration of the invite is not necessary as it is controlled by auth0. - The HF has been deployed in prod, and the invitation's default TTL should now be 30 days instead of 7 days.",15,"['Request to update default invitation expiry', 'Change made and tested in int environment', 'Build deployed to stage environment for validation', 'Testing required to verify TTL update', 'Collaborator invite flow tested', 'Customer and vendor collaborator flow tested', 'Customer inviting existing vendor flow tested', 'Change only affects invitation flow', 'Expiration of invite controlled by auth0', 'HF deployed in prod', ""Invitation's default TTL now 30 days""]",[],,['PLAT-2304'] 2023-05-22T22:05:17.645079,2023-05-24T00:48:02.720559,rsrinivasan,"{'hchintamreddy', 'sjain', 'rsrinivasan', 'mnirmal', 'rtaraniganty', 'snangia'}",6,"- rsrinivasan's submitted answers did not show up in the pulse survey. - The pulse survey was closed after 9:30am. - The pulse survey manager (PMS) threw a 400 Bad Request error at 9:33am, indicating that the pulse had already expired. - The errors in PMS are only logged and not shown in the UI. - There may be a race condition causing the issue. - It is suggested to remove the check in PMS to accept responses only if the pulse is active and handle the expiry logic in SC. - The pulse instance should be marked as closed only after all user messages have been updated. - There is a check on the start pulse button to see if the pulse has expired. - The action ID parsing failed due to an assumption that the first component would be a number.",39,"['Pulse survey', 'Race condition', 'Handling expiry logic', 'Logging errors']","[""rsrinivasan's submitted answers not showing up"", 'Pulse survey closed early', '400 Bad Request error', 'Errors not shown in UI', 'Start pulse button check', 'Action ID parsing failed']",NEGATIVE,[] 2023-07-18T00:39:01.234589,2023-07-18T04:15:17.691419,akasim,"{'akasim', 'pjha', 'aganivada', 'askumar'}",4,"- There was a deployment failure in INT due to an error related to the AWS Service Discovery service. - It was mentioned that there might be a need for changes in the CDK (Cloud Development Kit) code. - The error message indicated that a service already exists, which could be causing the deployment failure. - There was a discussion about the latest code and whether the LB (Load Balancer) or ECS (Elastic Container Service) IP change was merged for the service. - It was confirmed that the code was merged to the develop branch. - The suggestion was made to pull the latest changes and try deploying again. - A specific deployment command was provided to try deploying the service. - There was a mention of an error related to RedisCacheManager and Redisson during context initialization. - It was noted that the deployment went through after pulling the latest changes, as the CDK changes were not present in the previous workspace.",13,"['deployment failure', 'CDK code changes', 'latest code merged', 'deployment success']","['AWS Service Discovery error', 'LB or ECS IP change', 'RedisCacheManager error', 'Redisson error']",POSITIVE,['COLL-2690'] 2023-07-05T10:26:17.016289,2023-07-05T22:13:28.306259,hchintamreddy,"{'aganivada', 'ppant', 'rtaraniganty', 'hchintamreddy'}",4,"Key Insights from the Discussion Thread: - The user is experiencing an exception with unit tests in the pulse manager after updating the axm-commons version. - The exception is related to the creation of the 'ssmMultiInstanceConfigClient' bean. - The nested exception is a NullPointerException with the message ""region must not be null"". - The NullPointerException is caused by the 'createSsmClient' method in the 'SsmMultiInstanceConfigClient' class. - The 'createSsmClient' method is trying to create an SSM client using the 'Region' class, but the region parameter is null. - The issue seems to be related to the updated axm-commons version and the usage of the 'Region' class. - The user is using Spring Boot 2.7.3 and Spring Framework 5.3.22. - The exception is causing the application run to fail.",14,"['user experience', 'unit tests', 'pulse manager', 'axm-commons version', 'ssmMultiInstanceConfigClient bean', 'createSsmClient method', 'Spring Boot 2.7.3', 'Spring Framework 5.3.22']","['exception', 'NullPointerException', 'region must not be null', 'updated axm-commons version', 'Region class', 'application run failure']",NEGATIVE,"['COLL-2698', 'COLL-2698', 'COLL-2698', 'UTF-8', 'UTF-8']" 2023-03-27T17:48:20.610319,2023-03-27T20:28:03.533539,ppant,"{'aganivada', 'ppant'}",2,- ppant fixed an issue but forgot to merge the MR. - The log will only be printed when the service is started/bounced. - There is a suggestion to release a HF version for all services using the latest observability. - It is suggested to post an FYI message in the engineering channel to avoid confusion.,4,"['issue fixed', 'suggestion to release HF version', 'post FYI message']","['forgot to merge MR', 'log printing condition']",,[] 2023-04-11T10:26:56.534809,2023-04-11T10:28:14.164499,aganivada,{'aganivada'},1,"- The discussion is about updating and deploying lambdas to the integration environment. - The person named ""aganivada"" is asking ""ppant"" if they can update and deploy the lambdas. - They suggest requesting individual teams for sanity checks after the update and deployment.",2,"['updating lambdas', 'deploying lambdas', 'sanity checks']",[],,[] 2023-08-28T20:39:52.926099,2023-08-28T22:06:01.495969,aganivada,"{'aganivada', 'pjha'}",2,"- The shared runner has limited space available, with only 30% or less remaining. - Rama raised the issue of the ask-ai job and suggested moving it to a GitLab runner. - Pjha agrees to move the ask-ai job to a custom GitLab runner. - Pjha also agrees to check other costly pipelines that can be moved to the GitLab runner.",2,"['Moving ask-ai job to GitLab runner', 'Checking other costly pipelines']",['Limited space available on shared runner'],,[] 2023-02-28T22:22:32.301219,2023-03-02T20:16:20.145439,svummidi,"{'aganivada', 'svummidi', 'bganganna', 'anair'}",4,"- The user svummidi is experiencing issues when trying to accept an invite from a different tenant. - The registration process is failing without providing a clear explanation. - It is suggested to guide the user to the login page instead of the register page if there is an existing account. - There is an existing ticket for this issue (PLAT-1452). - There is a limitation in auth0 where it asks users to sign up even if they already have an account. - Auth0 suggests adding the user directly without sending an invitation as a workaround. - It is mentioned that if the Product Owner or Org Admin adds an authorized user who is already familiar with Axiamatic, approval may not be necessary but it is still important to inform the user about the new grant.",12,"['user experiencing issues', 'existing ticket', 'workaround suggestion', 'important to inform user']","['registration process failing', 'limitation in auth0']",UNKNOWN,['PLAT-1452'] 2023-07-06T23:51:52.693079,2023-07-07T00:08:52.182739,pjha,"{'aganivada', 'pjha'}",2,"- The default time-to-live (TTL) for instances is 6 hours, but it can be changed. - The cron job currently runs every hour to check for TTL. - If an instance is deployed at 10 am with a TTL of 1 hour, it will be deleted at 11 am. If it is deployed after 10 am, it will be deleted at 12 pm. - The cron job can be modified to check every 15 minutes or even less frequently if needed. - The person suggesting the change thinks that 1 hour is too short for the default TTL and suggests setting it to 3 hours. - The person realizes that they had set the TTL to 1 hour for a specific tenancy, which was their mistake.",8,"['Default TTL can be changed', 'Cron job runs every hour', 'TTL of 1 hour for specific tenancy was a mistake']","['Default TTL of 6 hours is too short', 'Cron job can be modified to check more frequently']",NEUTRAL,[] 2023-04-04T18:06:07.358329,2023-04-04T21:19:02.244039,rtaraniganty,"{'aganivada', 'rtaraniganty', 'ppant'}",3,"- The team is facing an issue with log4j xml not being reloaded from cloud config, which requires them to bounce the service. - The team wants to fix this issue permanently. - The team previously discovered that any service with cloud bus enabled should load config. - The team will verify all collab services to ensure they have cloud-config-bus enabled. - Currently, only message broker, sentiment collector, and slack integrator collab services have cloud-config-bus enabled. - The team plans to work with someone from collab to integrate all collab services with cloud config.",5,"['fixing log4j xml reload issue', 'enabling cloud-config-bus for collab services', 'working with collab to integrate services']",[],,[] 2023-03-09T22:14:20.996399,2023-03-09T23:20:14.279699,aganivada,"{'aganivada', 'rsrinivasan', 'rvaidya'}",3,"- The average response time of the getorgs API call is significantly higher in production compared to the integration and staging environments. - The main reason for the slower response time in production is the lack of role information for users in the orgs returned by the auth0 API. - Currently, a separate call is made for each org a user is associated with to fetch the user's role, which slows down the response exponentially when a user is associated with many orgs. - Removing the exception of the system_user from the mix in production will make the response time dependent on the number of orgs the user is associated with. - Introducing caching in the backend may not have a significant impact on improving the response time. - Storing users, orgs, and roles locally can potentially speed up the response time, but it may introduce challenges in keeping the data in sync. - The team will explore other alternatives and discuss with auth0 to find a solution to fix the issue before making changes on their side. - It is unlikely that customer collaborators will be associated with multiple orgs, so this issue mainly affects vendors associated with multiple customers. - Auth0 is proposing maintaining the role and orgid metadata on user metadata or app_metadata to avoid increasing the token size, but the team needs to ensure the data is always updated. - The team agrees that a proper solution from auth0 would be the best option, but they can consider implementing a temporary solution if needed.",7,"['Exploring alternatives', 'Discussing with auth0', 'Proper solution from auth0']","['Slower response time in production', 'Lack of role information', 'Separate call for each org', 'Exponential slowdown', 'Challenges in data sync', 'Unlikely impact of caching', 'Potential speedup with local storage', 'Temporary solution']",MIXED,[] 2023-03-02T06:51:40.036509,2023-03-02T18:36:30.989379,rvaidya,"{'aganivada', 'rvaidya', 'ppant'}",3,"- There is an issue with inviting a collaborator on the same domain mapped to the tenant. - The error message indicates that the domain of some emails is not valid as per the publisher. - The issue is related to a call made by the tenancy to the core. - The SpecificTenantOnly parameter should be set to false and the tenantId to query the publisher on the domain should be 19190. - Alternatively, the SpecificTenantOnly parameter can be set to true and the tenantId to query the publisher on the domain should be 0. - There are no tenant-specific publishers. - The team will fix the issue by making the specific tenant parameter false. - The fix will be deployed to the stage version first and then pushed to int after testing. - The fix has been merged and the deployment is in progress. - The issue is considered fixed after the deployment.",11,"['issue with inviting collaborator', 'error message', 'specific tenant parameter', 'fix deployed']","['invalid domain', 'call to core', 'tenant-specific publishers', 'deployment in progress']",,[] 2023-05-30T04:10:29.544959,2023-06-01T21:21:04.477889,aganivada,"{'aganivada', 'sranjan'}",2,- There is an error occurring in the jira-task-container when querying for non-existing tenants. - There are entries for non-existing tenants in the webhook_config table that need to be deleted. - There is a tracker created for deleting these entries. - There is tenant coordination hooked to the jira service. - The error may have occurred because the system test for Jira-service did not initially delete tenants created by the system test. - Logging from io.awspring.cloud.messaging.listener was disabled in the int environment to reduce excessive logging. - There is a request to check if similar logs exist in the stage and prod environments as well.,6,"['error in jira-task-container', 'entries in webhook_config table', 'tracker for deleting entries', 'tenant coordination', 'system test for Jira-service', 'logging from io.awspring.cloud.messaging.listener', 'check similar logs in stage and prod environments']",[],,['DISC-3071'] 2023-04-10T21:19:45.522409,2023-04-11T09:43:49.177639,aganivada,"{'aganivada', 'rsrinivasan'}",2,- aganivada wants to turn on SSO as the default for Haleon and suggests discussing it on a call with rsrinivasan and anair. - rsrinivasan has enabled SSO invitation FF for Haleon. - rsrinivasan is reminded by aganivada to enable SSO for lvsands. - aganivada has added the reminder to the steps they are tracking. - There is a link provided to a Slack conversation for further reference.,5,"['SSO', 'Haleon', 'enable SSO', 'SSO invitation FF', 'lvsands']",[],,[] 2023-07-31T00:55:35.941269,2023-08-01T21:47:24.682729,askumar,"{'rtaraniganty', 'askumar', 'gshenoy'}",3,"- The team is discussing the setup and present states of the ILM (Index Lifecycle Management) policy. - They mention the need for pricing analysis for the proposed solution and suggest listing different options. - In the INT and Stage environments, they suggest having 15 days of hot data and the rest as cold data. - There is a question about whether product discovery events are still being written to ES (Elasticsearch), and it is mentioned that it is not enabled for any customers currently. - They discuss the possibility of stopping the flow and dropping the index if the product discovery events are not being consumed, but can bring it back if needed. - They also mention the plan to deprecate AWS batch processing and move it to the event framework when required. - The projected cost with two configurations is added for review. - It is suggested to start with configuration #2 and consider removing page view records if the PM team wants to use a different analysis tool. - The team agrees to keep cost records before the proposed changes for completeness.",11,"['Setup and present states of ILM policy', 'Listing different pricing options', '15 days of hot data in INT and Stage environments', 'Deprecating AWS batch processing', 'Projected cost with two configurations', 'Starting with configuration #2', 'Keeping cost records before proposed changes']","['Product discovery events not being written to ES', 'Possibility of stopping flow and dropping index']",NEUTRAL,[] 2023-09-08T01:24:21.533839,2023-09-11T00:26:54.998129,aganivada,"{'aganivada', 'bganganna', 'amishra'}",3,"- Fixes related to user removal have been deployed to int. - There are three tickets related to user removal: PLAT-2540, PLAT-2541, and PLAT-2542. - The user removal issue in PLAT-2540 is still happening. - The root cause of the issues in all three tickets is the same, but they manifest in different forms due to different use cases. - Bganganna suggests reviewing the steps in PLAT-2540 and adds that a new collaborator needs to be invited as a PO for engagement. - Bganganna also mentions an issue with PLAT-2542 where only one product is visible after removing the user from the admins page and adding them as a PO for three products. - Bganganna provides a session ID and confirms that it is not an existing session. - Amishra asks for credentials to reproduce the issue. - Aganivada mentions that PLAT-2540 was working earlier and asks Amishra to check for any backend issues. - Bganganna plans to test both issues in the stage environment and provide an update. - Bganganna confirms that PLAT-2542 worked fine in the stage environment. - Aganivada thanks Bganganna and marks PLAT-2541 and PLAT-2542 as qa-ready, but will update PLAT-2540 after debugging.",19,"['Fixes deployed to int', 'Review steps in PLAT-2540', 'Tested PLAT-2542 in stage environment']","['User removal issue in PLAT-2540 still happening', 'Issue with PLAT-2542']",,"['PLAT-2540', 'PLAT-2541', 'PLAT-2542', 'PLAT-2540', 'PLAT-2542', 'PLAT-2540', 'PLAT-2540', 'PLAT-2542', 'PLAT-2542']" 2023-01-31T18:38:08.480769,2023-01-31T20:15:43.941189,araman,"{'aganivada', 'araman'}",2,- The discussion is about onboarding a vendor through an invitation. - There is a mention of a flow where permissions were updated correctly. - The participants suggest discussing the topic further over a call.,4,"['Onboarding vendor', 'Permissions update']",[],NEUTRAL,[] 2023-07-02T23:55:11.116169,2023-07-03T23:33:30.077599,nsrivastava,"{'aganivada', 'nsrivastava', 'rtaraniganty'}",3,"- The team is discussing the need for a Docker image for the gl pipeline of the teams-integrator node.js app. - They mention that custom Docker images are published and provide examples of existing images. - They discuss whether to update the base image or create a new one, considering the size and unnecessary packages that node.js might add. - They encounter a 401 error while pulling the axm parent and discuss potential solutions. - They mention a previous issue with the slack integrator project and discuss possible tweaks to resolve the build issue. - They confirm that the build works fine locally but encounter issues when building the feature/COLL-2683 branch. - They discuss the necessary steps for building the project locally and in the gitlab-ci.yml file. - They mention that the publish block may need further tweaking. - They discuss a change in the mvn clean install command and suggest updating it to be consistent with other commands. - They agree to update the command to `mvn $MAVEN_CLI_OPTS clean install -s $CI_SETTINGS`.",26,"['Docker image for gl pipeline', 'Custom Docker images', 'Examples of existing images', 'Updating base image', 'Creating a new Docker image', 'Resolving 401 error', 'Tweaks to resolve build issue', 'Confirming build works locally', 'Building project locally', 'Building project in gitlab-ci.yml', 'Tweaking publish block', 'Updating mvn clean install command']","['Size and unnecessary packages in node.js', 'Encountering issues with feature/COLL-2683 branch']",,"['COLL-2683', 'COLL-2683', 'COLL-2683']" 2023-07-20T00:37:34.208169,2023-07-20T00:48:17.118849,kagarwal,"{'kagarwal', 'aganivada'}",2,- Kagarwal is requesting access to the Gitlab repo for Axiamatic. - Kagarwal has cc'd the team @plat in the request. - Kagarwal has also cc'd psomasamudram563 in the request. - Aganivada informs that Kagarwal has sent an invitation to join the Axiamatic-main repo in Gitlab. - Kagarwal acknowledges that they have received the invitation.,4,"['Access request for Gitlab repo', 'Invitation to join Axiamatic-main repo']",[],,[] 2023-03-19T22:26:20.990969,2023-03-20T17:49:19.268689,ppant,"{'gshenoy', 'sranjan', 'rtaraniganty', 'ppant'}",4,"- The team was looking into Jira Lambda integrations with VPC. - They found a library called ""data-brokering-libs-jira"" that is used in Jira Lambda for authentication and common interfaces. - The methods in ""data-brokering-libs-jira"" are similar to another library called ""platform-libs-epoch"". - There is no specific reason for using ""data-brokering-libs-jira"" instead of ""platform-libs-epoch"", but it was not added as a dependency because it is specific to Epoch. - They can refactor the common methods into a ""plat-commons"" like library if required. - They discussed renaming ""plat-libs-common"" to ""plat-lambda-commons"" or creating a new library for lambda-specific common methods. - There is a concern about potential typos in environment variable names for endpoints. - They decided to rename ""plat-libs-epoch"" to ""plat-libs-lambda"" and use it in all lambdas. - There was a suggestion to rename it to ""plat-libs-py-common"" to indicate that it can be used in non-lambda cases as well.",7,"['Jira Lambda integrations with VPC', 'data-brokering-libs-jira', 'platform-libs-epoch', 'plat-commons', 'plat-lambda-commons', 'plat-libs-lambda', 'plat-libs-py-common']",['typos in environment variable names'],,[] 2023-02-17T05:40:11.273499,2023-02-17T06:55:35.808789,nsrivastava,"{'aganivada', 'nsrivastava', 'rtaraniganty'}",3,"- The docker task in the SC main branch failed with an error related to publishing to ECR. - There was an error retrieving the verification key from the identity provider, which caused the failure. - The issue seemed to be intermittent, as retrying the task helped. - There was speculation that the issue could be due to changes in GitLab's certificate. - It was suggested that the issue could be a result of changes made by GitLab on a Friday afternoon in the EU.",5,[],"['docker task failed', 'error publishing to ECR', 'error retrieving verification key', 'intermittent issue', 'certificate changes', 'changes made by GitLab']",NEGATIVE,[] 2023-01-08T21:18:10.412819,2023-01-08T21:21:24.342989,aganivada,"{'aganivada', 'sparashar'}",2,"- Aganivada assigned a task (PLAT-1560) to the person being addressed. - The task has been marked for the next sprint, but if the person has bandwidth, they can pick it up. - Sparashar is being copied on the conversation. - The API mentioned in the discussion thread will be used by the Apps service.",2,"['task assignment', 'next sprint', 'bandwidth', 'API usage']",[],,['PLAT-1560'] 2023-04-24T08:32:48.511449,2023-04-24T20:53:40.456489,pjha,"{'aganivada', 'pjha', 'rtaraniganty'}",3,"- The user ""pjha"" was trying to deploy the ""slack-integrator-service"" with Route53 mapping to a load balancer DNS, but encountered an error. - The length of the ""InstanceId"" was more than 64 characters, which caused an issue. - The user ""aganivada"" suggested abbreviating ""InstanceId"" to something like ""LB"" instead of ""loadbalancer"" and questioned the repetition of ""slackintegrator"" in the instance ID. - ""pjha"" made the suggested changes and successfully deployed the tenancy without deleting the stack. - There was a discussion about merging the changes to the ""release/0.9.10"" branch and cherry-picking the changes for stage deployment. - The user ""rtaraniganty"" asked how the instance ID was eventually picked, and ""aganivada"" explained that ""loadbalancer"" was changed to ""lb"" in the instance ID. - The user ""rtaraniganty"" questioned if this change was done across the board and at a base level. - The file ""core/commons/base_app_stack.py"" was mentioned as the source of the last screenshot. - There was a suggestion to shorten the service ID and track it as a separate change to check for any side effects during deployment.",29,"['successful deployment', 'suggested changes', 'resolved issue']","['encountered error', 'length of InstanceId', 'discussion about merging changes', 'question about instance ID change']",,[] 2023-01-05T20:15:06.456839,2023-01-05T20:17:49.261199,slackbot,"{'slackbot', 'aganivada'}",2,"- Aganivada is notifying the @plat team about something. - Rsrinivasan is asking if the change will affect SSO accounts already set up in int and stage. - Rsrinivasan mentions that Auth0 does not mention anything about SAML in the email, so they believe the integration should not break due to this change.",2,"['notification', 'SSO accounts', 'integration']","['break', 'change']",,[] 2023-07-27T21:27:01.919599,2023-07-27T21:27:41.412029,aganivada,"{'aganivada', 'askumar'}",2,"- There is an issue with Auth0 where it is case sensitive while sending invitations but converts the email to lowercase once the user accepts the invitation. - To maintain consistency, it is suggested to lowercase emails before sending them to Auth0. - The issue was brought up by aganivada and askumar agrees to look into it. - rvaidya and bganganna are also included in the discussion.",2,"['Auth0 case sensitivity issue', 'Lowercasing emails']",[],,['PLAT-2370'] 2023-08-30T06:54:29.267489,2023-08-30T07:26:44.037519,aganivada,"{'aganivada', 'pjha'}",2,- aganivada wants to sync up with pjha on an AWS network diagram. - aganivada suggests using the cloudmapper tool to generate the diagram. - pjha asks for a good time to connect. - aganivada suggests connecting now. - aganivada mentions starting a collect job from cloudmapper locally to generate a report once the data is available.,4,"['sync up', 'AWS network diagram', 'cloudmapper tool', 'connect now', 'generate report']",[],,[] 2023-04-26T04:19:05.086169,2023-05-03T06:23:44.795749,aganivada,"{'aganivada', 'rtaraniganty'}",2,"- There is an issue with the private DNS resolution in perimeter 81. - The resolution fails intermittently when both int and stage route53 resolvers are configured. - Perimeter 81 suggests having mappings for both env's in the resolver. - They suggest making changes from their side to allow the route53 resolver to respond to data from other accounts. - Another option suggested is to have 2 separate networks in perimeter 81, which would cost $7/month but allow for granular rules on who can access stage. - They attempted to share the route 53 resolver from int to stage but couldn't get it working. - Another option suggested is to share the route 53 hosted zone to another account. - If these options don't work, they will consider splitting int and stage into separate gateways in perimeter 81. - There is a bug in the perimeter81 dashboard, and a new gateway will cost $50/month. - Setting up VPC peering between int and stage is suggested as a pre-requisite for resolving DNS. - Peering between devx and int was successful, so there shouldn't be any issues setting up peering between int and stage. - The cost of separating the gateway for stage is an additional $50/month. - The peering approach is preferred for now, and a gateway can be provisioned later if needed. - A how-to article on DNS config with P81 private DNS is shared for review.",9,"['Successful VPC peering between devx and int', 'Peering between int and stage suggested', 'How-to article on DNS config with P81 private DNS shared']","['Issue with private DNS resolution in perimeter 81', 'Resolution fails intermittently with int and stage route53 resolvers', 'Unable to share route 53 resolver from int to stage', 'Bug in perimeter81 dashboard', 'Cost of separating gateway for stage']",NEGATIVE,[] 2023-02-14T22:28:24.806239,2023-02-15T00:09:30.559009,aganivada,"{'aganivada', 'psomasamudram563', 'askumar'}",3,"- The team is working on building a metrics dashboard for internal consumption. - They are facing an issue with mapping IDs to names, such as tenant ID and product instance ID. - Grafana has the ability to override IDs with external values using key-value pairs. - They are exploring if DronaHQ has the ability to filter and export mappings to a CSV for use in Grafana. - psomasamudram563 is also trying to solve a similar problem in DronaHQ, where they want to show the tenant name in the UI instead of the ID. - They are testing some pointers to see if it helps with their use case. - If the solution works, they can easily export a CSV from the data table in the DronaHQ UI. - psomasamudram563 has already shown askumar what can be done and has exported a CSV for all customers from INT. - They can leverage this functionality for customers, vendors, and products in the dashboard. - They can map the functionality to other environments with an extended license. - They are targeting to integrate with prod soon.",9,"['working on building metrics dashboard', 'exploring ability to filter and export mappings to CSV', 'testing pointers for use case', 'exporting CSV from DronaHQ UI', 'leveraging functionality for customers, vendors, and products', 'mapping functionality to other environments', 'targeting integration with prod soon']","['facing issue with mapping IDs to names', 'trying to solve similar problem in DronaHQ', 'need to show tenant name instead of ID in UI']",NEUTRAL,[] 2023-07-10T07:36:02.440219,2023-07-10T07:53:25.753949,aganivada,"{'aganivada', 'askumar'}",2,- There was a task pending on merging the account-refresher to the develop branch of job-executor. - The task involved adding variables via cdk. - The task was not yet merged due to concerns about expiry. - The person responsible for the task executed it locally to avoid expiry. - There were updates to the project that might make merging the changes tricky later on. - The task was closed and the changes were merged after confirming that the pipeline had passed. - Aganivada caught the issue and asked about merging the changes.,6,"['task pending', 'merging account-refresher', 'adding variables via cdk', 'task executed locally', 'task closed', 'changes merged', 'pipeline passed', 'Aganivada caught the issue']","['concerns about expiry', 'merging changes tricky']",,[] 2023-04-11T10:19:06.774449,2023-04-11T10:22:16.463319,aganivada,"{'aganivada', 'mnirmal'}",2,- Aganivada is asking Mnirmal to share a collab ticket related to the pulse metrics product. - Mnirmal provides the link to the collab ticket: . - Aganivada thanks Mnirmal for sharing the ticket. - Rtaraniganty is added to the ticket for pulse metrics.,3,"['collab ticket', 'pulse metrics']",[],,['COLL-2197'] 2023-04-03T10:17:14.836589,2023-04-04T03:13:34.932379,aganivada,"{'aganivada', 'bganganna', 'rsrinivasan', 'rvaidya'}",4,"- There is a flow where a vendor user sends an invitation to a product admin. - Two methods are mentioned, but it is unclear which one is used in the vendor inviting customer flow. - During onboarding, when a vendor is inviting a customer, the POST /users/invitation/{tenantId} endpoint is called, and no permissions are passed. - The issue of sending empty permissions and setting usermetadata as an empty array is identified and being fixed. - It is mentioned that this flow may have regressed after changes were made for invite with permissions during the collaborators flow. - There is a request to raise an Auth0 ticket regarding the lack of audit logs when user metadata is updated via invitation for an existing user.",14,"['Flow of vendor user sending invitation to product admin', 'Identification of issue with empty permissions and usermetadata', 'Fixing the issue', 'Request to raise Auth0 ticket']","['Unclear which method is used in vendor inviting customer flow', 'Regression after changes for invite with permissions during collaborators flow', 'Lack of audit logs when user metadata is updated via invitation for existing user']",MIXED,[] 2023-07-12T23:48:41.402539,2023-07-13T23:19:16.955279,aganivada,"{'aganivada', 'ppant'}",2,- aganivada is requesting the subteam @plat to review some FAQ's on multi-instance deployment. - aganivada wants ppant and pjha to review the document and share their feedback. - ppant has reviewed the document and finds it good (LGTM).,3,"['multi-instance deployment', ""FAQ's""]",[],,[] 2023-08-31T20:48:11.898339,2023-09-01T21:45:53.381329,aganivada,"{'aganivada', 'anair'}",2,"- The discussion is about handling a flow where the SSO admin assigns an app to a user without an invitation. - The change seems technically feasible, but the team wants to know when they can target the change. - There have been a couple of new asks that have come up recently, including an Intercom integration and platform changes for a new UI stack deployment. - The team is considering making the change in version 0.9.16. - The flow being discussed is the ""pacden flow"" where an admin added a user directly in IDP, but the user was seeing a blank page on login because the org/role was not assigned to them.",3,"['Technically feasible change', 'Targeting the change', 'Intercom integration', 'Platform changes for new UI stack deployment', 'Version 0.9.16']",['Blank page on login'],,[] 2023-09-03T21:58:32.798009,2023-09-04T15:56:53.482329,aganivada,"{'aganivada', 'bganganna', 'mli', 'snangia'}",4,"- There was an error in configuring pulse for the platform tenant in the weekly retro tenant. - The configuration was still active even though the instance was cancelled, which caused the error. - The configuration was disabled in the database to resolve the issue. - Cancelling a pulse instance puts it in a cancelled state and it doesn't show up on the UI, making it impossible to modify the configuration from the UI. - There is a tracker ticket to address this issue and modify the user experience for cancelled instances. - Without resolving this issue, handling scheduled pulse configurations would be tricky and confusing for customers. - There are a few options discussed for handling cancelled pulse instances, including deleting the configuration/parent series, asking users if they want to stop the entire pulse series, or showing cancelled pulses in a separate section for users to view and create similar pulses. - Blocking users from creating new pulses with the same configuration as an existing one feels unintuitive, and it would be more helpful to inform users about the existing pulse and let them decide if they want to continue launching a new one.",13,"['Configuration issue resolved', 'Tracker ticket created', 'Options discussed for handling cancelled pulse instances', 'Improved user experience']","['Error in configuring pulse', 'Confusion for customers', 'Unintuitive blocking of pulse creation']",NEGATIVE,"['DISC-3573', 'COLL-2590', 'DISC-3573']" 2023-07-03T05:17:44.908239,2023-07-04T01:25:20.881849,aganivada,"{'aganivada', 'ppant'}",2,"- The team is discussing the issue of not sending invites for certain invitation types. - They realize that the templates and enums were not added correctly in the switch statement. - They decide to add the missing enums and templates for the orgadmin inviting collaborator. - There is an issue with the template, and they make changes to fix it. - They test the changes and encounter an exception, but they continue to troubleshoot and make further adjustments. - They successfully send an email using the new template, but notice that some fields are not updating properly. - They identify that the method for building the email template object is missing some parameters. - They discuss the need to pass the invitee name in the API. - They make changes to the method and continue to work on the issue. - They plan to submit the changes for review and discuss the need to skip populating the SSO name for organizations without SSO enabled. - They make additional changes for the tests to pass and receive approval from ppant.",43,"['Adding missing enums and templates', 'Fixing template issue', 'Troubleshooting and making adjustments', 'Successfully sending email using new template', 'Identifying missing parameters in email template object', 'Discussing passing invitee name in API', 'Making changes to the method', 'Submitting changes for review', 'Receiving approval from ppant']","['Not sending invites for certain invitation types', 'Encountering exception while testing changes', 'Fields not updating properly in email template', 'Skipping populating SSO name for organizations without SSO enabled']",POSITIVE,['PLAT-2243'] 2023-08-23T13:45:31.019669,2023-08-24T04:11:55.416469,anair,"{'aganivada', 'bot_message', 'anair', 'rsrinivasan', 'bganganna'}",5,"- The team is discussing enabling SAML as the default login for the PacDen/17160 platform. - There is an issue with new users at EA and Pacden not being invited via SSO, even though they were invited on the UI. - The team is considering moving the dependency from ff (feature flag) and doing it via code. - The team is discussing enabling the tenancy.security-settings-sso-invitation for all tenants in prod. - There is a flag related to SSO flow that needs to be turned on during onboarding SAML steps for EA and Pacden. - The team is testing the admin invitation flow in stage and planning to enable it in prod. - There is an error message ""access_denied&error_description=the%20specified%20account%20is%20not%20allowed%20to%20accept%20the%20current%20invitation"" that needs to be investigated. - The team is considering using at least one tenant in stage with SAML since all major customer setups are using SAML flow. - The team is discussing the ticket PLAT-1760 and whether it should be picked up in version 0.9.15.",23,"['enabling SAML as default login', 'moving dependency from ff to code', 'enabling tenancy.security-settings-sso-invitation', 'testing admin invitation flow', 'using SAML for major customer setups']","['issue with new users not being invited via SSO', 'error message investigation', 'ticket PLAT-1760']",,"['TEST-1100', 'TEST-1100', 'PLAT-1760']" 2023-02-21T07:27:03.662999,2023-02-21T21:10:09.017419,pjha,"{'aganivada', 'pjha'}",2,- pjha and aganivada are discussing and reviewing a document or webpage related to GitLab Runner Integration. - aganivada has added some comments and is asking pjha to check them. - pjha has replied to the questions or comments made by aganivada.,3,['GitLab Runner Integration'],[],,[] 2023-08-23T20:48:53.959189,2023-08-24T09:52:55.640899,rtaraniganty,"{'araman', 'rtaraniganty'}",2,"- The shout-out message was generated using a human-machine-human pipeline, involving sanitizing kudos messages, running them through ChatGPT, and then editing the output. - ChatGPT did not give credit to the QA or systest team and left them anonymous. - Ask-AI, on the other hand, highlighted the systest team in the retro summary. - The discussion acknowledges that with AI, there are both wins and losses.",4,"['human-machine-human pipeline', 'sanitizing kudos messages', 'Ask-AI', 'wins']","['ChatGPT', 'QA team', 'systest team', 'losses']",,[] 2023-02-14T23:27:42.081179,2023-02-25T05:12:00.922979,araman,{'araman'},1,"- The discussion thread includes a conversation between two individuals, araman and bganganna. - The conversation mentions different levels of difficulty for tasks, including Trivial, Medium, Hard, Rare, On-Call, and Automation. - The user rtaraniganty agrees with the mentioned levels of difficulty. - araman informs bganganna about something, but the specific details are not provided.",2,"['Levels of difficulty', 'Agreement on difficulty levels']",['Missing details'],,[] 2023-07-19T22:41:42.731509,2023-07-19T22:48:02.324829,rtaraniganty,"{'aganivada', 'rtaraniganty'}",2,- The user rtaraniganty is experiencing an issue with the ECS panel in the AWS console. - The user aganivada is able to see the ECS panel and provides a URL for it. - rtaraniganty asks for the URL from the first screenshot and suggests trying a different URL. - aganivada confirms that the URL works for them and suggests that the issue may be related to rtaraniganty's browser or VPN. - rtaraniganty suspects that the problem may be with Safari and mentions that they can see the page on Chrome.,15,"['ECS panel', 'URL works', 'Chrome']","['issue with ECS panel', 'different URL', 'Safari']",,[] 2023-03-21T11:10:20.190789,2023-03-23T06:49:04.622969,rtaraniganty,"{'aganivada', 'sranjan', 'pkarthikeyan', 'gshenoy', 'rtaraniganty', 'ppant'}",6,"- The group is discussing the use of specific libraries for their jira lambda integration with VPC. - Guru suggested refactoring the common methods into a plat-commons library. - Prabhu Pant proposed renaming plat-libs-epoch to plat-libs-lambda. - Rama suggested renaming it to plat-libs-py-common, as it could be used in non-lambda cases as well. - Sanjiv Ranjan proposed making a tracker ticket for the changes. - Anil and Prabhu Pant discussed the logging occurring for tenant 19190 and found that the Ingestion Service was producing lots of debug logs. - Anil suggested that the log level be reverted to info. - Sagarika Nangia agreed to do it the next day. - Anil also suggested that Paran should use the tenant name resolver API instead of making multiple API calls. - Anil asked Prashant about his progress on researching a load-balancer. - Prashant replied that he was stuck with other issues related to setting up multiple instances but has now fixed them and will begin looking into the load-balancer. - Anil and Ashwani are discussing their attempts to override the context for their transaction manager. - Anil and Rama discussed the possibility of having a LoadBalancer (LB) instead of IP addresses in Route53. - They found that more requests are made directly to the tenancy which suggests requests are not going through the load balancer. - They agreed to register the LB with the service name, update CDK, and deploy all services once to switch from xxx to xxx-lb. - Anil found an article that discussed setting up AWS alias DNS names which would route traffic to an Elastic Load Balancing load balancer. - Prashant Jha shared a link about auto-deploying docker images to ECR. - Rama suggested further discussing how it works and how to properly implement it. - A tracker ticket was created for point# 1 above.",10,"['refactoring common methods into plat-commons library', 'renaming plat-libs-epoch to plat-libs-lambda', 'renaming plat-libs-epoch to plat-libs-py-common', 'creating a tracker ticket for changes', 'reverting log level to info', 'using tenant name resolver API', 'progress on researching a load-balancer', 'overriding context for transaction manager', 'using LoadBalancer instead of IP addresses in Route53', 'registering LB with service name', 'setting up AWS alias DNS names', 'auto-deploying docker images to ECR']","['Ingestion Service producing lots of debug logs', 'being stuck with other issues related to setting up multiple instances']",POSITIVE,"['PLAT-1840', 'PLAT-1839', 'PLAT-1839', 'PLAT-1840']" 2023-07-25T21:23:35.964489,2023-08-03T11:31:34.504389,askumar,"{'aganivada', 'askumar', 'ppant'}",3,- askumar is asking the team to review a proposal document for Auth0 User permission replication in Core. - ppant has added a few questions regarding the proposal. - askumar has updated the document to simplify data storage structures. - aganivada was occupied with CXO dashboard work and will review the document the next morning. - aganivada has added some inline comments to the document and suggests switching to a design template and filling in as many sections as possible. - aganivada provides reference links for design templates and examples. - aganivada suggests using puml diagrams for database relation representation and sequence diagrams to explain the flow. - aganivada also suggests using component diagrams from Structurizr. - askumar agrees to make the suggested changes and move the document under the specs folder. - askumar informs aganivada that the comments have been addressed and will get in touch for review and further changes.,9,"['review proposal document', 'simplify data storage structures', 'addressed comments', 'make suggested changes']",['occupied with CXO dashboard work'],,[] 2023-03-01T12:23:12.166039,2023-03-01T19:37:01.383799,svummidi,"{'aganivada', 'svummidi'}",2,"- The user, svummidi, is asking if there is a way to access the tenant selection page without logging out and logging back in. - Aganivada is unsure about how to achieve this but suggests that it should be possible with some UI effort. - The ability to access the tenant selection page without logging out and logging back in is dependent on the UI logic.",2,[],[],NEUTRAL,[] 2023-06-12T09:14:36.112839,2023-06-14T07:33:49.334849,snangia,"{'pjha', 'askumar', 'rsrinivasan', 'svummidi', 'rtaraniganty', 'snangia'}",6,"- The user-action service and message-broker are stuck in the ""update_rollback_in_progress"" state. - There are bean creation failures and unsatisfied dependencies in the code. - The desired count of the services can be set to 0 to stop the existing services and then deploy. - There is a suggestion to cancel the deployment using ""Cancel Update Task"" on the AWS CloudFormation page. - The rollback of the stack has been completed. - There are exceptions related to parsing notification events and empty scores in Druid. - There is a suggestion to add checks in the code to handle expected cases and log info or warn logs instead of stack traces.",44,"['rollback_completed', 'deployment_suggestion', 'code_improvement_suggestion']","['service_stuck', 'bean_creation_failures', 'unsatisfied_dependencies', 'parsing_exceptions', 'empty_scores_exceptions']",NEGATIVE,['COLL-2545'] 2023-05-05T10:51:41.999949,2023-05-09T09:23:15.286099,rtaraniganty,"{'aganivada', 'askumar', 'rtaraniganty'}",3,"- The RDS/Aurora instances in the INT environment are being discussed and it is suggested to switch off the instances that are not being used to avoid unnecessary costs. - The naming of the two Aurora clusters in INT is not following any pattern and it is suggested to review and align them with the production naming convention. - There is a test cluster called ""Commons cluster"" that is being used for testing dumps and scripts. It was temporarily stopped over the weekend but will be closed soon. - The other instances that are not being used will be stopped, kept for a couple of days, and then deleted within the week. - It is suggested to check with Nayan or anyone involved if the replication postgres instance is needed. - The cluster names in INT and STAGE will be kept as ""aslv2-common-int"" and ""aslv2-common-stage"" to align with the production names. - The conversation is also directed to pjha for their input or confirmation.",11,"['switch off unused RDS/Aurora instances', 'review and align Aurora cluster naming', 'stop and delete unused instances', 'keep cluster names in INT and STAGE', 'check if replication postgres instance is needed', 'direct conversation to pjha']",[],,[] 2023-05-15T04:03:44.957249,2023-05-15T09:38:40.797209,aganivada,"{'aganivada', 'pjha', 'rtaraniganty'}",3,"- There was a minor import issue with coordination building blocks during the deployment of lambda's to int with CDKv2. - Prashant found an issue while deploying trigger eventbridge changes, and he is attempting to fix it from CDK. - Prashant shared a summary of the error they saw during deployment in the synth.txt file. - Prashant made some changes in the code, and rtaraniganty reviewed and approved them. - They are planning to merge the changes into the develop branch and deploy version 9.11 to the stage using v2 code. - They will monitor for any issues during the week before deploying to the stage.",7,"['coordination building blocks', 'fixing trigger eventbridge changes', 'code changes', 'review and approval', 'merging changes', 'deployment to stage']","['import issue', 'error during deployment']",POSITIVE,[] 2023-07-13T16:20:15.384039,2023-07-17T22:23:26.647069,rtaraniganty,"{'aganivada', 'bganganna', 'rtaraniganty', 'mnirmal'}",4,"- There is junk data in the prod-db that needs to be cleaned up. - The junk data is present in the ""publisher"", ""product_instance"", ""product"", and ""product_category"" tables. - The maximum created_at dates for these tables indicate that no junk data has been added after June 23rd. - Instead of invoking the tenant deletion flow for thousands of tenants, it is suggested to run scripts to clear up the junk data. - There are 1994 tenants corresponding to the 4416 junk publishers. - It is important to check for any stale pulses in prod before cleaning up the publishers and products to avoid triggering unexpected alerts. - The TIDs for ""%{APPS_TENANT_ID_TEAMS}"", ""%{APPS_TENANT_ID_SLACK}"", and ""%{APPS_TENANT_ID_WEBAPP_ONLY}"" need to be checked for stale pulses. - The queries to check for active and deleted pulses are provided. - There are 7 active pulses and 3300 deleted pulses. - It is suggested to delete the core-data entries for the junk data. - The updated counts for the ""publisher"", ""product_instance"", and ""product"" tables are provided. - The records of sys_test tenants currently mapped in sys_test_details.json have been cleaned up. - For the remaining records, the tenant_id's except ""0"" need to be exported from the db to a text file and a robot job needs to be run to trigger deletion for each tenant. - The missing records of tenants are likely the bulk of the junk data. - Caution needs to be exercised during the cleanup process. - A db backup will be taken before running the cleanup and it will be scheduled for the weekend or next Monday morning.",30,"['junk data cleanup', 'prod-db', 'tables', 'created_at dates', 'scripts', 'tenants', 'pulses', 'queries', 'core-data entries', 'counts', 'sys_test tenants', 'db backup']","['stale pulses', 'unexpected alerts', 'deleted pulses', 'cleanup process']",NEUTRAL,[] 2023-03-23T07:08:38.710029,2023-03-23T12:15:52.520679,aganivada,"{'aganivada', 'rtaraniganty', 'rvaidya', 'pkarthikeyan'}",4,"- The team is discussing the usage of a name resolution API for the tenant table. - They are referring to an enum for the resolution target types. - The use of a colon in the typeName is not required. - The payload for the name resolution query should include the typeName and key. - The key can be a long or string value. - The team is considering using the Java classes generated by OpenAPI for the name resolver API. - The name resolver API can be used from apps, not just lambda functions.",19,"['usage of name resolution API', 'referring to enum for resolution target types', 'payload for name resolution query', 'considering using Java classes generated by OpenAPI', 'name resolver API can be used from apps']","['colon in typeName is not required', 'key can be long or string value']",,[] 2023-03-17T08:02:52.682699,2023-03-17T08:31:01.283089,ppant,"{'rtaraniganty', 'askumar', 'ppant'}",3,"- The lambda function is making API calls to non-existing APIs of the ingestion and score-provider services. - These API calls are throwing 404 errors, but the tenant deletion lambda is treating them as if the tenant didn't exist. - The step functions are not erroring out because of this behavior. - The lambda function should add a check to avoid these calls. - The issue occurs when the services are called at their correct endpoints, resulting in 401 errors and causing the step functions to fail. - The 404 errors may be expected if the called service has not implemented the tenant deletion API. - The fix for this issue would be in the state machine JSON. - It is suggested to drop the ingestion service altogether from the list if it has not integrated yet.",6,"['Lambda function making API calls to non-existing APIs', ""Tenant deletion lambda treating 404 errors as if tenant didn't exist"", 'Step functions not erroring out', 'Lambda function should add a check to avoid these calls', 'Fix for this issue in state machine JSON', 'Dropping ingestion service if not integrated']","['API calls throwing 404 errors', '401 errors causing step functions to fail']",NEGATIVE,[] 2023-05-04T03:55:49.854329,2023-05-04T04:24:31.695419,aganivada,"{'aganivada', 'askumar'}",2,- There is a spike in logging observed in production. - The WARN logs indicate that the GetRecords request is being throttled due to rate exceeding for a specific shard. - The issue is considered a retryable exception and the service usually recovers on retrying. - It is suspected that the stream may be getting overloaded. - There may be excessive logging due to debug logs or exceptions in the INT and stage environments. - It is discovered that the excessive logging is a false alarm caused by a threshold set during testing. - The kinesis throttling issue is identified and an alert will be added to monitor it. - It is suggested to check if resources on the kinesis front need to be increased or the polling interval needs to be reduced. - The size of debug level logs in INT appears to be acceptable for a one-day period.,8,"['service recovery', 'kinesis throttling issue', 'alert monitoring', 'resource increase', 'polling interval', 'debug level logs']","['spike in logging', 'GetRecords request throttling', 'stream overload', 'excessive logging', 'false alarm', 'threshold setting', 'INT environment']",NEUTRAL,[] 2023-07-11T22:41:56.958359,2023-07-11T22:46:42.775659,aganivada,"{'aganivada', 'pjha'}",2,- The discussion is about whether AWS allows one proxy to serve multiple RDS instances for production. - The pricing logic suggests that it is not possible for one proxy to serve multiple RDS instances. - It is mentioned that each proxy can have a single target.,2,[],['AWS does not allow one proxy to serve multiple RDS instances'],NEGATIVE,[] 2023-07-10T08:53:14.218519,2023-07-10T09:31:50.684549,rsrinivasan,"{'aganivada', 'rsrinivasan', 'rtaraniganty'}",3,"- The team is discussing the need to enable the ""Newline delimiter"" when sending a Kinesis stream to S3. - Enabling the delimiter would allow for easier loading in Athena and performing SQL filters and group by operations. - The missing delimiter is causing only the first record in a batch to be shown in Athena. - A backlog has been added for making this change in CDK. - There is a suggestion to manually enable the flag for now if it is a blocker. - It is mentioned that the change can be applied to all Kinesis streams with dynamic partitioning enabled. - There is a discussion about who can make the change in production and the possibility of testing before deploying the CDK change. - A link is shared with instructions on how to configure dynamic partitioning for Kinesis in CDK. - There is a mention of a kinesis_stack and kinesis_delivery_stream_stack that could potentially be refactored to eliminate duplication.",11,"['Enabling Newline delimiter', 'Easier loading in Athena', 'SQL filters and group by operations', 'Adding backlog for CDK change', 'Manually enabling flag', 'Applying change to all Kinesis streams', 'Testing before deploying CDK change', 'Configuring dynamic partitioning for Kinesis in CDK', 'Refactoring kinesis_stack and kinesis_delivery_stream_stack']","['Missing delimiter', 'Only first record shown in Athena', 'Uncertainty about who can make the change in production']",UNKNOWN,['PLAT-2291'] 2023-03-03T05:07:24.306539,2023-03-07T11:47:56.344119,rsrinivasan,"{'akalyta', 'svummidi', 'rsrinivasan'}",3,"- The team is reviewing topics generated from the last pass data. - The topics in the last test results were lengthy, but they are not found in the current data. - To validate the quality of the topics, they are looking for topics of maximum and minimum length. - The prompt has been modified to include ranked keywords, emotion, and product aspect. - The team ran retro comments with the updated prompt. - There is an interesting article comparing GPT-3.5 and Davinci models. - GPT-3.5 is being used for proof of concept, but it cannot be trained yet. - GPT-3.5 is cheaper and faster. - The team is curious if GPT-3.5 can be used for summarization. - In the enriched retro comments data, the last entries in the ""turbo_topics"" column represent the emotion.",8,"['reviewing topics', 'validating topic quality', 'modified prompt', 'interesting article', 'GPT-3.5', 'summarization']","['lengthy topics', 'missing topics', 'GPT-3.5 cannot be trained']",NEUTRAL,[] 2023-08-22T21:00:34.256229,2023-09-06T20:26:46.670349,aganivada,"{'aganivada', 'bot_message', 'anair', 'bganganna', 'askumar'}",5,"- There were changes made to the system_user masking in invitation emails. - The changes were deployed to the STAGE environment and tested. - However, there were still issues with the system_user name appearing in the email and on the sign-up page. - It was suggested to add the system user as a PIU for the tenant and then invite the vendor. - The issue with the system_user name appearing on the sign-up page was identified as a setting in auth0. - The suggestion was made to hide the entire inviter field in the auth0 settings. - It was also suggested to rename the system_user in the auth0 database to ""your org"" to handle the issue without completely removing the inviterName field. - The inviterName field is fetched from auth0 and not stored in the company's database. - The final decision was to replace {inviterName} with ""your org"" in the auth0 message text.",24,"['changes made to system_user masking', 'changes deployed to STAGE environment', 'suggestion to add system user as PIU', 'suggestion to hide inviter field in auth0 settings', 'suggestion to rename system_user in auth0 database', ""final decision to replace inviterName with 'your org' in auth0 message text""]","['issues with system_user name appearing in email and sign-up page', 'issue with system_user name on sign-up page identified as setting in auth0']",POSITIVE,['TEST-1095'] 2023-05-31T07:58:51.831239,2023-05-31T17:50:34.092369,aganivada,"{'aganivada', 'rtaraniganty'}",2,"- aganivada is asking for a review of a document on the platform epics. - rtaraniganty suggests using Swanly instead of or in addition to the document. - aganivada agrees to move the document to Swanly for tracking execution, but finds Confluence easier for planning.",3,"['document review', 'Swanly']",['Confluence'],,[] 2023-06-13T22:36:27.555169,2023-06-14T08:33:39.178879,rtaraniganty,"{'rtaraniganty', 'rvaidya', 'hchintamreddy', 'svummidi', 'askumar'}",5,"- The team is considering using Snowflake for storing customer metrics. - Snowflake performs well with large amounts of data and is cost-effective. - The average query time for 1.1K records in the PulseInstance table is around 1-2 seconds. - The team is also considering using Druid for the metric data. - The biggest risk of using Druid for this use case is its immutability. - Victoria db was used for storing value metrics in SHN. - The team did not consider using Druid for value metrics as they were looking for Prometheus compatibility. - The team plans to gather more information about the amount of data and prototype some queries to determine the feasibility of using Snowflake. - Sub-second response time is desired for the UI, so query performance is important. - Snowflake is an OLAP system and query times depend on the amount of data being pulled. - Snowflake stores data in S3 in segments and uses a master index for fetching segments. - Storing metadata along with the metric is important for flexible reprocessing of the data. - The team may need to do a proof of concept without trying out all the options. - Consideration is being given to using Druid along with Snowflake for keeping things similar to value score. - Dumping data into Snowflake and running a pipeline on top of that data to generate synthesized data stored in Druid is another option being considered. - Storing raw data in Snowflake and synthesized data in a faster store (RDBMS) is a common approach to bridge the needs of an OLAP system with a UI.",23,"['Snowflake', 'query performance', 'cost-effective', 'large amounts of data', 'prototype queries', 'flexible reprocessing', 'keeping things similar to value score', 'storing raw data in Snowflake', 'synthesized data in a faster store']","['Druid immutability', 'not considering Druid for value metrics']",POSITIVE,[] 2023-05-22T21:02:16.276599,2023-05-23T08:07:31.473709,aganivada,"{'aganivada', 'bganganna', 'mnirmal', 'ppant'}",4,"- There is an issue with the email template ""collab-web-survey-v1"". - The issue is not related to the VPN changes. - The last testing of the template was done in the last release. - The template was updated in the int environment, but the stage environment still has an older version. - There are handlebar errors in the stage environment. - There are new placeholder variables in the latest commit. - The websurvey template needs to be updated in the stage environment. - The deployment of version 0.0.18.1 SC on stage is planned. - Mnirmal is assigned a task related to COLL-2479.",18,"['email template issue', 'template testing', 'template update', 'handlebar errors', 'placeholder variables', 'template deployment', 'task assignment']",['VPN changes'],UNKNOWN,"['PLAT-2067', 'COLL-2479']" 2023-06-12T00:51:54.545279,2023-06-14T07:44:31.272749,askumar,"{'pjha', 'rtaraniganty', 'askumar'}",3,"- The team is experiencing high CPU utilization in the Aurora clusters in the INT environment. - They have migrated 5 services to the INT environment. - The team is considering increasing the max vCPU configuration in INT or using multiple clusters due to the high number of connections. - One suggestion is to increase the min and max vCPU as needed. - The current vCPU configuration is 2, but it is not necessary to increase it to 8. - The sizes of the RDS instances in INT are db.t3.medium and db.t3.small. - The team could try increasing the min and max ACU (Approximate Compute Units) to see if it improves the CPU utilization. - There are currently 5 services connected to the RDS instances in INT. - The team should consider using a proxy concept to reduce the DB connection requirement, especially for the Collab service. - Prashant should be consulted regarding the proxy concept. - Collab service has about 160 connections in RDS commons.",22,"['high CPU utilization', 'migrated services', 'increasing vCPU configuration', 'increasing ACU', 'proxy concept', 'consulting Prashant', 'Collab service connections']",[],,[] 2023-05-10T22:51:48.556359,2023-05-11T03:32:33.550679,gshenoy,"{'aganivada', 'pmangalapuri', 'gshenoy', 'ppant'}",4,"- There is an issue with the tenancy API, as it is returning a 403 error. - VPN has been enabled on the tenancy, vault, and notification service, so signing in to the VPN is required when attempting to access them locally. - The API is failing when generating reports and requests are made from Lambda via puppeteer. - The Lambda functions should be talking over private DNS. - The authorizer attached to the dashboard app has not been updated, which may be causing unauthorized access. - The reporting Lambda is calling all three integrated services at their private endpoints. - There may be an issue with connecting to the vault. - The team is planning to check the token passed to the dashboard app from the Lambda function.",14,"['VPN enabled on tenancy, vault, and notification service', 'Lambda functions talking over private DNS', 'Authorizer attached to dashboard app not updated', 'Reporting Lambda calling integrated services at private endpoints', 'Token passed to dashboard app from Lambda function']","['Issue with tenancy API returning 403 error', 'API failing when generating reports', 'Issue with connecting to vault']",,[] 2023-01-09T22:45:51.941449,2023-01-24T23:09:51.760769,rvaidya,"{'aganivada', 'rvaidya', 'anair', 'rsrinivasan', 'svummidi', 'gshenoy', 'rtaraniganty'}",7,"- The discussion highlights the need for normalized details in audit logs and the challenges of ID resolution. - There is a debate about whether to convert everything to a name in the context or resolve it later. - It is suggested to implement simple APIs that return names given IDs and potentially use Redis for optimization. - There is a proposal to build a single name resolver API at the service level for different types of objects. - The scope of the audit log is discussed, including the need for a product context and the retention period of data. - There is a suggestion to flag records as archived for delete cases to potentially recover data. - The need for user names in the audit log is discussed, and it is suggested to use email if the name is not available. - The complexity and need of the audit log are discussed, and it is mentioned that it will be used rarely. - There is a plan to discuss backend dependencies and best practices in the future. - The possibility of using event notification to enrich the audit log is mentioned. - Questions are raised about logging failed states and logging context in async events.",75,"['Normalized details in audit logs', 'Implementing simple APIs for ID resolution', 'Building a single name resolver API', 'Archiving records for delete cases', 'Using email as a fallback for user names', 'Discussing backend dependencies and best practices', 'Enriching the audit log with event notification']","['Challenges of ID resolution', 'Scope of the audit log', 'Complexity and need of the audit log', 'Logging failed states', 'Logging context in async events']",,['PLAT-1301'] 2023-03-28T08:54:02.813839,2023-03-28T21:21:47.274449,aganivada,"{'aganivada', 'rtaraniganty'}",2,"- The VPN overview and planning document has been shared for review. - The existing architecture has some suggestions for improvement, such as adjusting the placement of the Swagger label and renaming ""RDS Access"" to ""SSH access."" - It is suggested to label less common icons and indicate S2S token acquisition via Cognito in the existing architecture. - In the updated architecture, it is recommended to indicate the authentication token(s) involved on the lines left of API Gateway. - There is a question about the security implications of Eventbridge API calls to access service, which is protected with S2S token authentication.",3,"['VPN overview and planning document shared', 'Suggestions for improving existing architecture', 'Labeling less common icons', 'Indicating S2S token acquisition via Cognito', 'Recommendation for indicating authentication tokens in updated architecture']",['Question about security implications of Eventbridge API calls'],,[] 2023-07-11T22:55:51.400469,2023-07-12T00:44:22.144059,aganivada,"{'aganivada', 'askumar'}",2,- There is a job scheduled to run once a day at 5:30 am IST. - The user wants to trigger the job for testing without updating the schedule. - The deployment triggers all jobs. - The backup job failed to trigger for some reason. - There is an error in the job. - The job-executor needs to be deployed in stage and prod. - CDK changes are required for the deployment. - The CDK changes can be cherry-picked or deployed from the develop branch. - The azure account refresh changes have been verified. - No additional keys need to be added for the deployment.,16,"['job scheduling', 'deployment triggers', 'job-executor deployment', 'CDK changes', 'azure account refresh changes', 'deployment keys']","['backup job failure', 'job error']",,[] 2023-01-12T03:41:51.639469,2023-01-12T10:19:45.549719,askumar,"{'aganivada', 'rtaraniganty', 'askumar'}",3,"- There was an alert triggered due to a tenant clean up, but there were no systemic issues. - The API gateway denied a request due to a malicious URL. - The IP address 49.205.251.29 was identified as belonging to Beam Telecom Pvt Ltd in India. - The issue was caused by a script that was being tested and had incorrect values while parsing. - The script has been fixed and stale tenants have been cleaned up. - The person responsible for the script is located in Hyderabad. - The person's internet provider is actcorp. - The IP address was investigated earlier by someone named Seshan. - The discussion emphasizes the importance of investigating suspicious activities to catch potential hacks early.",13,"['tenant clean up', 'script fix', 'stale tenants cleaned up', 'investigating suspicious activities', 'catching potential hacks early']","['alert triggered', 'API gateway denial', 'malicious URL', 'incorrect values while parsing', 'person responsible for the script', 'investigation of IP address', 'potential hacks']",POSITIVE,[] 2023-08-30T08:43:15.545539,2023-09-11T20:55:44.574959,svummidi,"{'aganivada', 'pjha', 'hchintamreddy', 'svummidi', 'rtaraniganty', 'snangia'}",6,"Key Insights from the Discussion Thread: - The event bridge rule for delivering pulses is inactive and de-authorized. - The issue with the event bridge started around August 26th. - There was an error from AWS while fetching tokens from Cognito, causing random failures in both stage and prod environments. - The team has redeployed trigger connections to fix the issue. - An alert has been deployed in prod for failing event bridge rules. - The team is waiting for AWS to debug the failing connections. - AWS recommended updating the secret to recover the connection instead of destroying and deploying. - AWS noticed that the ApiDestination with connection arn:aws:events:us-west-2:643306803378:connection/production-S2S-Token-trigger-user-action-nudge-Connection/5ebb9fcd-f476-4258-9f5f-c07b2548209c started to fail consistently starting 2023-08-28T12:20:59.224Z. - The requests to the OAuth endpoint to get credentials are taking longer than the 5-second timeout, resulting in the connection being deauthorized. - The team is using AWS Cognito for OAuth and most services reach Cognito to generate tokens. - There is no option in the AWS console to debug why the Cognito OAuth endpoint is taking more than 5 seconds. - The team requested metrics or logs from the Cognito user pool linked to the failing connection. - The team is unsure how to recover the resource (API destination) back to active status.",95,"['Redeployed trigger connections to fix the issue', 'Alert deployed in prod for failing event bridge rules']","['Event bridge rule for delivering pulses is inactive and de-authorized', 'Issue with event bridge started around August 26th', 'Error from AWS while fetching tokens from Cognito', 'Failing connections to ApiDestination', 'Requests to OAuth endpoint taking longer than timeout', 'No option in AWS console to debug Cognito OAuth endpoint', 'Resource (API destination) status recovery']",NEGATIVE,"['COLL-2702', 'COLL-2857']" 2023-04-25T05:16:24.011519,2023-04-26T11:07:36.567849,pjha,"{'aganivada', 'pjha', 'rtaraniganty'}",3,"- The discussion is about reviewing and querying CloudFront logs from an S3 bucket in Athena. - There is a suggestion to create Athena tables in all three environments and link them to the bucket. - The tables have been created in the Int and Stage environments, and will be created in production once changes are deployed. - There is a minor spelling issue in the database name that needs to be fixed. - The provisioning process is being discussed, and it is unclear whether it is done via CDK or involves manual steps.",4,"['Reviewing and querying CloudFront logs', 'Creating Athena tables in different environments', 'Provisioning process']","['Spelling issue in database name', 'Unclear provisioning process']",,[] 2023-05-16T03:13:42.734789,2023-05-17T03:57:27.971719,aganivada,"{'aganivada', 'sranjan', 'pmangalapuri', 'bganganna', 'araman'}",5,"- The team is deploying a vault-access service with CDK changes to fix an authentication issue. - Reports seem to be working fine after the deployment. - The team discusses the use of vault in other flows, such as web surveys and Jira web hooks. - They decide to whitelist the `survey` and `jira-webhook` endpoints. - The team plans to include regression tests for reporting, web-app survey, and jira-webhook in the next sprint. - There is an ongoing issue with the Jira web hook test, which is being looked into. - The team discusses the restricted access to vault for all endpoints except for `/jira-webhook` due to VPN changes.",18,"['deployment of vault-access service', 'CDK changes', 'reports working fine', 'whitelisting survey and jira-webhook endpoints', 'regression tests for reporting, web-app survey, and jira-webhook', 'looking into ongoing issue with Jira web hook test']","['authentication issue', 'restricted access to vault for all endpoints except /jira-webhook']",POSITIVE,[] 2023-07-17T12:48:41.990129,2023-07-17T22:00:05.281339,rtaraniganty,"{'aganivada', 'rtaraniganty'}",2,"- The user rtaraniganty is discussing the possibility of subscribing to multiple channels. - The user aganivada has added integration to both the channels mentioned by rtaraniganty. - The subscription management will be handled by operations@axiamatic.com, with pjha being copied on the communication.",2,"['multiple channel subscription', 'integration']",[],UNKNOWN,[] 2023-03-30T22:53:49.986749,2023-03-31T10:09:33.053529,ppant,"{'aganivada', 'rtaraniganty', 'ppant'}",3,"- There are vulnerability checks failing for the `py==1.11.0` package in Python 3.8 used in Gitlab pipelines. - Updating the Python version to 3.10.7 in pipelines fixes the security check. - AWS Lambda does not support Python 3.10, only versions 3.7 to 3.9. - The pipeline is fixed with security checks by using specific versions of packages. - The `py` package was cached in the Gitlab pipeline using pip's cache in CI. - The versions of various packages that work and will be used in all lambdas are listed.",15,"['vulnerability checks fixed', 'Python version updated', 'pipeline fixed with security checks', 'specific versions of packages used', 'package caching in Gitlab pipeline', 'working versions of packages listed']","['vulnerability checks failing', 'Python 3.10 not supported by AWS Lambda']",POSITIVE,[] 2023-04-26T17:41:11.572599,2023-04-26T20:47:08.249839,rtaraniganty,"{'aganivada', 'rtaraniganty'}",2,"- GitLab charges $10 for 1000 CICD minutes. - To justify the cost of their own runner ($60), they need to move 6000 minutes of workload to that runner. - Moving the automated-tests-v1 and a couple of other costly projects off shared runners will provide enough benefit. - They plan to disable the shared runner once they are done with the 0.9.10 validation. - They will attempt a proof of concept (POC) for turning on GL-runner on-demand.",2,"['GitLab charges $10 for 1000 CICD minutes', 'Moving workload to own runner', 'Disabling shared runner', 'Proof of concept for GL-runner on-demand']","['Cost of own runner', 'Costly projects on shared runners']",,[] 2023-01-31T07:25:13.484429,2023-02-02T07:22:46.754499,rvaidya,"{'aganivada', 'askumar', 'rvaidya'}",3,"- There is an issue where the core is throwing a 500 error instead of a 403 error when a ProductAdmin logs in without any permission in the permission set. - The Discovery API is still returning products because it is querying /pvt APIs, and some summary stats are being returned from the API on the apps. It is mentioned that the PID/Product Admin preauthorize check is not present in the PVT table. - There is a suggestion to define a global exception handler that returns a 403 error instead of a 500 error when an AccessDenied exception occurs. - It is mentioned that the validation for both the PVT and product instance API calls is the same, and no issues were found there. - Debug logs are suggested to be enabled to check if the headers/token are coming in correctly. - There is a log indicating that the X-Custom-Token-Header is empty or not set, which may be causing the issue. - It is discovered that there is an issue with the permission mapping and check, where an empty permission list is causing the interceptor to set the permissions as NULL, resulting in no authorization being applied. - It is mentioned that a UI fix is being worked on, but a backend fix is also needed. - The ticket DISC-2321 is suggested to be moved to PLAT for further work. - A fix has been deployed to STAGE and tested with the given users, where only the orgAdmin with product permission can see the products and the ProductAdmin cannot see the products for the given test account.",19,"['global exception handler', 'validation for PVT and product instance API calls', 'debug logs enabled', 'fix deployed to STAGE']","['core throwing 500 error instead of 403 error', 'Discovery API querying /pvt APIs', 'PID/Product Admin preauthorize check not present in PVT table', 'X-Custom-Token-Header empty or not set', 'permission mapping and check issue', 'UI fix needed', 'ticket DISC-2321']",MIXED,"['DISC-2321', 'DISC-2321', 'PLAT-1650', 'PLAT-1650']" 2023-06-23T13:14:39.176689,2023-06-23T15:17:37.593839,rtaraniganty,"{'rtaraniganty', 'ppant'}",2,"- There is a discussion about a change in the network/security configuration that may be affecting connections to RDS via sshuttle. - There is uncertainty about whether the change in the Perimeter 81 config is related to the issue. - It is mentioned that the security group for Aurora was added to allow connections with just VPN, without the need for sshuttle. - There is a concern that the change is not working as expected.",5,"['change in network/security configuration', 'security group for Aurora']","['connections to RDS via sshuttle', 'uncertainty', 'Perimeter 81 config', 'concern']",NEGATIVE,[] 2023-05-16T06:28:08.403699,2023-05-16T06:58:46.432079,aganivada,"{'aganivada', 'ppant'}",2,"- There is a build failure in the ""develop"" branch after merging HF (hotfix) changes in the ""tenancy"" branch. - The failure seems to be related to parallel execution of tests. - One specific test, `LinkVendorCustomerTest.testGetCustomersWithoutBeingOrgAdmin`, failed in the morning. - The test was annotated with `same_thread` to prevent it from running in multiple threads. - After retriggering the pipeline, the builds have been successful so far. - The team plans to revisit why the test fails when run in multiple threads.",9,['successful builds after retriggering pipeline'],"['build failure', 'failure related to parallel execution of tests', 'specific test failure', 'test annotated with same_thread', 'test fails when run in multiple threads']",,[] 2023-05-29T20:49:00.326999,2023-05-31T06:23:51.122639,aganivada,"{'aganivada', 'askumar'}",2,- Snowflake supports STS (Security Token Service) for data load from S3. - The team is considering using the STS approach for data integration with Snowflake. - The STS approach does not require adding a user and generating credentials for that user. - The team plans to test the STS approach in the INT environment. - The file load works and data is successfully populated in the INT environment. - The team is considering configuring the STS approach for the prod environment. - The integration with Snowflake in the prod environment will use the STS model similar to GitLab. - The setup for the prod account in Snowflake is complete. - Data from both the INT and PROD S3 environment buckets will be available in Snowflake. - The team has added an environment filter to the dashboard to filter data based on the environment. - The team expresses gratitude to pjha and aganivada for their help with AWS production work.,7,"['Snowflake supports STS', 'team considering using STS approach', 'file load works and data is successfully populated', 'team considering configuring STS approach for prod environment', 'integration with Snowflake in prod environment will use STS model', 'setup for prod account in Snowflake is complete', 'data from both INT and PROD S3 environment buckets will be available in Snowflake', 'team added environment filter to dashboard', 'team expresses gratitude to pjha and aganivada']","['STS approach does not require adding user and generating credentials', 'team plans to test STS approach in INT environment']",,[] 2023-05-09T11:53:11.433229,2023-05-09T22:41:14.028279,pjha,"{'aganivada', 'pjha'}",2,"- The discussion is about configuring a Group runner for Axiamatic Main. - The first step is to obtain the Registration token for the Group runner. - The Group runner needs to be registered on the EC2 instance and the GitLab runner process needs to be restarted. - The ""Run untagged jobs"" option needs to be enabled for the runner. - The shared runner for the project needs to be disabled. - A webhook needs to be created for the project. - One of the participants does not have permission to perform actions for Axiamatic Main and needs help with steps 1 and 3. - Another participant mentions that they now have the necessary permissions and can help after lunch.",3,"['Configuring Group runner', 'Obtaining Registration token', 'Registering Group runner on EC2 instance', 'Restarting GitLab runner process', ""Enabling 'Run untagged jobs' option"", 'Creating webhook for the project', 'Participant offering help after lunch']",['Participant lacking permission for Axiamatic Main'],,[] 2023-02-12T20:51:59.813159,2023-02-13T01:25:27.425789,aganivada,"{'aganivada', 'rsrinivasan', 'rvaidya', 'ppant'}",4,"- There is an issue with sending notifications, specifically an error message ""failed to send smtp unknown error"" with the cause being ""com.sun.mail.smtp.SMTPAddressFailedException: 501 Invalid RCPT TO address provided"". - The specific use case that triggered this issue is unknown. - The email address in the logs causing the error is """". - There is a need to investigate how the email address became invalid, possibly due to invalid characters in the email ID. - Email validation is not currently performed when adding a user to the team, except during bulk user imports. - A UI ticket will be created to add email validation. - It is unclear if the user's email can be updated directly from the backend or if the user needs to be removed and added again. - The tenant in question is a webapp tenant.",12,[],"['issue with sending notifications', 'error message: failed to send smtp unknown error', 'cause: com.sun.mail.smtp.SMTPAddressFailedException: 501 Invalid RCPT TO address provided', 'unknown use case triggering the issue', 'investigate how the email address became invalid', 'email validation not performed when adding a user to the team', 'UI ticket to add email validation', ""unclear if user's email can be updated directly from the backend"", 'tenant is a webapp tenant']",NEGATIVE,[] 2023-05-22T04:30:00.970569,2023-05-22T05:01:55.704879,aganivada,"{'aganivada', 'psomasamudram563', 'gshenoy', 'askumar'}",4,"- There is an issue with the ""createProductOmnibus"" method being flagged as read-only, causing errors after enabling routing. - The issue seems to be that the method is checking if a product exists, which is causing the entire transaction to be considered as read-only. - The transaction manager should only consider transactions marked as read-only as read-only. - A ticket has been created to address this issue. - The issue may be related to an internal AWS library problem. - The traceId for the issue is 00000000000000003c03a063612f66c4. - The issue was previously fixed, but it seems to have resurfaced. - If the library cannot fix the issue, the readonly flag may need to be set to false forcefully. - The discussion participants express gratitude for the help and resolution of the issue.",17,"['createProductOmnibus method', 'ticket created', 'issue fixed']","['read-only flag', 'errors', 'transaction manager', 'AWS library problem', 'issue resurfaced', 'readonly flag']",,['PLAT-2066'] 2023-01-02T12:18:22.300859,2023-01-02T22:20:16.321799,rtaraniganty,"{'aganivada', 'rtaraniganty'}",2,"- The user ""rtaraniganty"" added an epic to track flags that could/should be retired. - There are three platform flags with targeting set to off, but they are returning false. - These flags correspond to security-settings and SSO related features that are on the back-burner. - The decision to discard these features was made because they added more complexity than solving the purpose. - A task will be added to cleanup tenancy feature flags in the current sprint.",2,"['epic to track flags', 'cleanup tenancy feature flags']","['platform flags with targeting set to off', 'security-settings', 'SSO related features', 'complexity']",NEGATIVE,['HOUS-28'] 2023-03-28T11:44:46.585659,2023-04-03T08:21:02.554599,rtaraniganty,"{'pjha', 'rtaraniganty'}",2,"- The team is discussing the possibility of setting up a process to build and deploy Docker images using ECR (Elastic Container Registry). - They currently have scripts in place for building images locally and in GitLab, but they want to explore using ECR to save time. - The team is considering two approaches: creating the ECR repo, building the image locally, and pushing it to the repo before deployment, or providing the Dockerfile path to the CDK code to build and push the image to the ECR repo. - The downside of the second approach is that the ECR repo won't be destroyed when the stacks are destroyed. - The team agrees that the first approach is preferable and suggests including 'local' in the image name to facilitate cleanup of older images. - The team plans to implement this in the next sprint.",8,"['setting up process to build and deploy Docker images using ECR', 'exploring using ECR to save time', 'creating ECR repo and pushing image before deployment', ""including 'local' in image name for cleanup"", 'implementing in next sprint']","['downside of second approach', 'ECR repo not destroyed when stacks are destroyed']",POSITIVE,[] 2023-03-16T14:24:53.104389,2023-03-20T09:05:10.203339,rtaraniganty,"{'svummidi', 'bganganna', 'rtaraniganty'}",3,"- There is a concern that entities are not being deleted after tests. - The test code has been updated to track down tenants that are left behind. - Once the updated code is merged, it should be easier to identify the issue. - There are some test suites where publishers are created without teardown, but they do not have sanity or regression tags. - There are two keywords that are called in most of the app test cases and do not have teardown for deleting publishers. - The team plans to add publisher deletion as part of the teardown process.",9,"['Updated test code to track down tenants', 'Easier identification of the issue', 'Adding publisher deletion to teardown process']","['Entities not being deleted after tests', 'Test suites without teardown for creating publishers']",NEUTRAL,[] 2023-05-30T03:13:11.957109,2023-05-30T08:51:04.391719,ppant,"{'aganivada', 'rsrinivasan', 'rtaraniganty', 'ppant'}",4,"- The Notification-service's tests are not compatible with the latest versions of axm-commons. - There is an unsatisfied dependency exception in many test classes after updating axm-commons from version 0.0.8 to 0.0.17 in its pom. - The exception is caused by a missing bean of type 'org.springframework.boot.autoconfigure.security.oauth2.client.OAuth2ClientProperties'. - The TestConfig.java file in the GitLab repository may be related to the tenancy change. - Adding a mock s2sWebClient did not resolve the issue. - Comparing the application-test.yml file of the Notification-service with other services did not reveal any differences. - The issue seems to be specific to the tests, as the build works fine. - The necessary configurations for the OAuth2 client are provided in the application-test.yml file. - Adding the configurations and a mock bean fixed the issue. - A merge request (MR) has been opened to address the problem.",14,"['tests compatibility', 'unsatisfied dependency exception', 'missing bean', 'TestConfig.java', 'mock s2sWebClient', 'application-test.yml', 'build success', 'OAuth2 client configurations', 'merge request']",[],,[] 2023-09-13T08:44:09.085899,2023-09-13T08:45:37.701399,rtaraniganty,"{'aganivada', 'rtaraniganty'}",2,- There is a question about whether the story points question at the end of the retro was modified or if a bug caused the issue. - Aganivada suggests that it seems like a bug. - The conversation took place on the Slack channel.,2,[],[],NEUTRAL,[] 2023-03-30T17:01:00.347599,2023-03-30T17:32:37.465879,rtaraniganty,"{'svummidi', 'anair', 'rtaraniganty'}",3,- There is an exception on the ScoreProvider side for the EA Tenant. - The root exception is a QueryInterruptedException caused by an unsupported operation. - The issue is being discussed with the Druid support team. - There may be issues in the UI while trying to access the page due to EA data re-indexing.,6,"['Exception on ScoreProvider side', 'Issue being discussed with Druid support team', 'UI issues due to EA data re-indexing']",['QueryInterruptedException caused by unsupported operation'],UNKNOWN,[] 2023-04-17T23:34:29.540539,2023-04-17T23:51:58.361669,nsrivastava,"{'aganivada', 'nsrivastava'}",2,"- The user nsrivastava is experiencing a ""docker missing"" error in one of the pipelines. - aganivada explains that this error is due to the GL runner changes not being merged to the develop branch yet. - aganivada suggests redirecting the pipeline to a regular shared runner. - nsrivastava acknowledges the explanation and thanks aganivada. - aganivada informs nsrivastava that the issue is being worked on and once resolved, the GL-runner changes can be merged to develop. - nsrivastava expresses gratitude as the job succeeded with the suggested solution.",6,"['docker missing', 'GL runner changes', 'regular shared runner', 'job succeeded']",[],,[] 2023-04-05T17:14:43.605709,2023-04-05T21:28:02.689089,rtaraniganty,"{'aganivada', 'rtaraniganty'}",2,"- The user ""rtaraniganty"" mentions that whenever they log in to LD (presumably a software or platform), they receive a notice about the SDK reaching EOL (End of Life) and the need to upgrade. - ""rtaraniganty"" plans to upgrade in the next couple of sprints, indicating that they are aware of the importance of upgrading and are willing to do so. - The user mentions that there have been non-trivial changes made to the SDK, suggesting that the upgrade may involve significant modifications or improvements. - ""aganivada"" acknowledges the information shared by ""rtaraniganty"" and adds a ticket (PLAT-1908) to address the issue or track the upgrade process.",2,['SDK upgrade'],[],POSITIVE,['PLAT-1908'] 2023-08-10T22:37:16.853499,2023-08-11T00:31:20.427719,akasim,"{'akasim', 'aganivada'}",2,- Akasim is experiencing an error in the MB develop branch build. - Akasim is seeking help from the team. - Aganivada asks if the error is happening locally or in GitLab. - Akasim confirms that the error is occurring in GitLab. - Aganivada offers to investigate the issue. - Akasim expresses surprise that the build is successful locally. - Aganivada suggests a change that might resolve the issue. - Akasim acknowledges the suggestion. - Aganivada confirms that the build is now working. - Aganivada mentions a previous response that could be related to the root cause of the issue. - Akasim expresses gratitude for the help.,9,"['help', 'investigate', 'resolve', 'working', 'suggestion', 'acknowledge', 'gratitude']","['error', 'surprise', 'issue', 'root cause']",,[] 2023-03-02T02:55:40.935489,2023-03-02T08:55:08.255229,amishra,"{'aganivada', 'gshenoy', 'amishra'}",3,"- The user is experiencing an error message stating ""Could not find products by name. search-registry-product-error|809|Find products matching filter {name=ilike='tes'} failed"" while searching for products by name. - The error is occurring in the INT environment, but it is working fine in the Stage environment. - The error is not related to permission or RBAC issues. - The issue may be due to stale data in the INT environment, specifically records with wrong product types. - The error is caused by some products being inserted with names instead of IDs for the product type in the integration database. - The suggestion is to try using different text to see if the issue persists. - The cleanup of the wrong values in the integration environment has been performed. - Other strings are working fine for the user. - The cleanup of the wrong values was done by gshenoy.",11,"['Error message', 'Working fine in Stage environment', 'Not related to permission or RBAC issues', 'Stale data in INT environment', 'Cleanup of wrong values in INT environment', 'Other strings working fine', 'Cleanup performed by gshenoy']","['Error occurring in INT environment', 'Products inserted with names instead of IDs', 'Issue persists with different text']",NEGATIVE,[] 2023-07-13T06:56:31.686259,2023-07-13T09:36:27.691229,rsrinivasan,"{'aganivada', 'rsrinivasan', 'rtaraniganty', 'mnirmal'}",4,- There was a gap in availability for the pulse manager service in INT. - The restart of the pulse manager service was managed by ECS. - The restart occurred between 7:20 PM and 7:30 PM. - The CPU utilization was high during this time. - There may be a memory leak or issue causing the memory usage of the pulse manager to increase over time. - A cloudwatch alert was added to monitor the memory usage and a memory dump was requested for comparison.,24,"['Restart of pulse manager service', 'Management of restart by ECS', 'Cloudwatch alert for memory usage', 'Memory dump request']","['Gap in availability for pulse manager service', 'High CPU utilization', 'Memory leak or issue']",NEGATIVE,['COLL-2713'] 2023-05-08T00:51:38.230219,2023-05-08T00:58:52.321769,sjain,"{'aganivada', 'ppant', 'sjain'}",3,"- The builds for sentiment-collector are failing while publishing a docker image. - The error is related to the argument --role-arn, which is expected to have one argument. - The branch needs to be marked as protected in order to access some GitLab variables. - By default, only the develop, main, and HF branches are marked as protected. - The necessary permission to publish the image can be obtained by marking the branch as protected. - The third pipeline build used to be ""deploy build,"" but the CI/CD pipeline has changed and now only has two builds. - The issue has been resolved and the pipeline has been re-triggered.",6,"['builds for sentiment-collector', 'error resolution', 'pipeline re-triggered']","['failing builds', 'docker image publishing error', 'argument --role-arn', 'branch protection', 'missing deploy build']",POSITIVE,[] 2023-08-11T10:54:08.652279,2023-08-14T06:11:12.036479,rtaraniganty,"{'aganivada', 'rtaraniganty'}",2,- rtaraniganty suggests running askai build on the EC2 runner. - aganivada mentions attempting to move askai to GL-runner but reverted back. - aganivada asks pjha to take a look at the issue with the project based runner not picking up the job and the webhook not working.,3,"['rtaraniganty suggests running askai build on EC2 runner', 'pjha asked to look at the issue with project based runner']","['aganivada attempted to move askai to GL-runner but reverted back', 'project based runner not picking up the job', 'webhook not working']",,[] 2023-06-28T04:08:51.285219,2023-06-28T10:59:00.330809,ppant,"{'aganivada', 'rtaraniganty', 'ppant'}",3,"- ppant made too many commits for multi-instance changes, causing Gitlab's runner to block tenancy. - There were 6-7 failures in the previous attempts. - Switching to a shared runner is suggested as a solution. - ppant's MR's branch is causing the issue. - The MR is dependent on core-data's 2052-SNAPSHOT version, which got deleted from the package registry after 2 days. - The artifact reaper job log shows that the timestamp for the deleted version is 2023-05-19T14:40:52.774Z. - Snapshots older than 21 days are deleted. - It is unclear if an alert was received when ppant ran into this issue.",13,"['Switching to a shared runner', 'Artifact reaper job log', 'Alert received']","[""ppant's excessive commits"", 'Gitlab runner blocking tenancy', 'MR branch causing the issue', 'Deleted version of core-data', 'Failures in previous attempts']",NEGATIVE,"['PLAT-2052', 'PLAT-2052']" 2023-05-25T09:04:16.536229,2023-05-25T09:13:04.516609,aganivada,"{'aganivada', 'pjha'}",2,- The question is about finding all the stacks that are deployed via multi-env. - The slack message does not specify the deployment version. - It is possible to add a tag for all multi-instance deployments in AWS. - The tags can be added to the resources created through multi-instance-deployment.,4,"['finding all stacks deployed via multi-env', 'adding tags for multi-instance deployments in AWS', 'adding tags to resources created through multi-instance-deployment']",['slack message not specifying deployment version'],,[] 2023-08-03T10:42:30.001809,2023-08-11T14:05:48.070349,nsrivastava,"{'rsrinivasan', 'nsrivastava', 'rtaraniganty'}",3,"- The user is trying to publish an npm package onto GitLab registry but is unable to overwrite the same version. - They tried unpublishing the package but received an error stating that the last version cannot be deleted and it will block republishing for 24 hours. - One option suggested is to append a GUID in the snapshot versions to avoid the 24-hour timeline. - Another user suggests deleting the package explicitly using a Python script. - The user is hoping that there is a configuration option on GitLab registry to adjust this behavior, but a Google search did not yield any results. - A team member opened a case with GitLab to inquire about this issue. - GitLab responded stating that they have added a unique index to prevent having more than one npm package with the same name, version, and project ID. - The GitLab product team is working on implementing a feature to enable/disable duplicate package uploads. - Currently, duplicates are allowed for other package types like Maven, Generic, Helm, NuGet, and Conan, but not for npm packages. - The user is asked to upvote and comment on an existing feature request for allowing/disallowing duplicate npm package uploads.",10,"['npm package publishing', 'GitLab registry', 'overwriting package version', 'appending GUID in snapshot versions', 'deleting package using Python script', 'configuration option on GitLab registry', 'opening a case with GitLab', 'unique index to prevent duplicate packages', 'implementing feature to enable/disable duplicate package uploads', 'allowing/disallowing duplicate npm package uploads']","['unable to overwrite package version', 'error while unpublishing package', '24-hour timeline for republishing', 'no configuration option found on GitLab registry', 'Google search did not yield results']",POSITIVE,[] 2023-07-03T06:19:30.415719,2023-07-03T07:28:46.852119,aganivada,"{'aganivada', 'pjha'}",2,- There is a discussion about updating the container name for mult-instance. - Code changes are needed to update the container name. - The commit for the code changes is provided for review. - The changes are approved and can be merged to the develop branch. - There are no known risks associated with merging the changes.,5,"['updating container name', 'code changes', 'commit review', 'changes approved', 'merge to develop branch']",['known risks'],,[] 2023-09-04T00:28:50.681339,2023-09-04T02:16:54.147239,aganivada,"{'aganivada', 'U05PCBD9SHE', 'psomasamudram563', 'pjha'}",4,"- There is a licensing issue with Perimeter 81, causing a limit of 31 accounts instead of the needed 32. - The issue is preventing the addition of a user named Vishal to Perimeter 81. - A support ticket has been raised with Perimeter 81 to resolve the issue. - The user <@U05PCBD9SHE> still has access to the int and stage UI. - Dejan's account can be removed from Perimeter 81 since he is no longer with the company. - The user <@U05PCBD9SHE> does not have access to the VPN, but has access to DB and AWS. - The P81 policies are disabled in the stage environment and will be enabled later. - Vishal is not able to access the AWS console or DB in the INT environment without the VPN. - Anandhi does not have an account provisioned for her in Perimeter 81.",15,"['support ticket raised', ""Dejan's account removal"", 'P81 policies enabled later']","['licensing issue', 'Vishal unable to join', 'access issue for <@U05PCBD9SHE>', 'Vishal unable to access AWS and DB', 'Anandhi account not provisioned']",NEGATIVE,[] 2023-08-24T09:33:38.879419,2023-08-27T20:39:46.044769,aganivada,"{'aganivada', 'araman'}",2,- There is a request to rename the vendor admin role in the dashboard. - The ticket for this request has been added for version 0.9.14 HF. - The core-data-commons and dashboard-app pom will be updated as part of this change. - Araman wants to be assigned the ticket when it is QA-Ready. - The change has been deployed to the stage.,4,"['request to rename vendor admin role', 'ticket added for version 0.9.14 HF', 'change deployed to stage']",[],,['PLAT-2479'] 2023-04-06T03:39:59.194859,2023-04-06T05:11:32.565209,askumar,"{'aganivada', 'askumar'}",2,"- The discussion is about a demo for AWS Aurora Read Write DB. - askumar requests aganivada to provide a voiceover for the demo. - aganivada agrees to provide the voiceover and asks for specific points to cover. - The specific points mentioned are: 1. If someone attempts a write operation on a method flagged as read-only, the operation will fail. 2. Changes in the application.yml file where the read path and read-write path are mentioned. 3. Local configuration where there is only one type of instance.",3,"['AWS Aurora Read Write DB', 'Voiceover request', 'Agreement to provide voiceover', 'Specific points for voiceover']","['Failed write operation', 'Changes in application.yml file', 'Local configuration with only one instance']",POSITIVE,[] 2023-07-11T22:36:58.535949,2023-07-11T23:03:12.757359,rtaraniganty,"{'pjha', 'rtaraniganty'}",2,"- The minimum charge for ACUs is 8 per hour. - The cost for 8 ACUs per month is $87.6. - For 5 clusters across 3 environments, the minimum cost would be $438 per month. - The suggestion is to try using the minimum number of ACUs in the INT environment to potentially save money. - The response confirms that there will be a minimum charge of 8 ACUs per hour.",2,"['minimum charge for ACUs', 'cost for 8 ACUs per month', 'minimum cost for 5 clusters across 3 environments', 'suggestion to save money', 'confirmation of minimum charge for ACUs']",[],,[] 2023-05-01T23:03:10.160799,2023-05-08T23:38:59.570349,askumar,"{'aganivada', 'rtaraniganty', 'rsrinivasan', 'askumar'}",4,"- The team encountered an issue while migrating data via AWS Data Migration service. - The migration task was able to copy table data and primary keys, but not additional table metadata such as foreign key constraints. - They discussed using pg_dump for schema and constraints migration, which seemed to work fine for core data but took a long time for copying BLOBS in the commons database. - They considered using the database engine's native tools for migrations and dropping foreign keys, primary keys, and indexes before restore to reduce migration time. - It was mentioned that AWS DMS only supports basic schema migration and does not automatically create or migrate objects such as secondary indexes, foreign keys, user accounts, non-primary key constraints, sequences, default values, stored procedures, triggers, synonyms, etc. - They discussed the possibility of taking a snapshot of an RDS instance with specific schemas and restoring it to another instance, but it was not possible to create a snapshot or migrate to an Aurora cluster with only specific schemas. - Multiple snapshots from different RDS instances cannot be migrated to one target Aurora instance. - AWS suggested either adding constraints manually or using the pg_dump approach for migration. - The team received an update from AWS support, which provided clarification on the limitations and recommended approaches for migration. - They mentioned that they just need to migrate the liquibase change logs table separately to complete the cycle.",20,"['successful table data and primary key migration', 'consideration of using native database tools for migration', 'clarification from AWS support', 'liquibase change logs table migration']","['issue encountered during data migration', 'inability to migrate additional table metadata', 'long time taken for copying BLOBS in commons database', 'limitations of AWS DMS for schema migration', 'inability to create snapshot or migrate to Aurora cluster with specific schemas', 'inability to migrate multiple snapshots to one target Aurora instance']",MIXED,[] 2023-06-27T00:39:35.183979,2023-06-27T00:40:24.769859,aganivada,"{'aganivada', 'pjha'}",2,"- Aganivada is requesting PJha to add Askumar to the stage and share credentials with Ashwani for a DB migration. - They also mention the need for a task on PJha's name to provision an Aurora instance in the stage. - Once the migration is complete, they plan to review IAM roles. - RTaraniganty is copied on the conversation.",2,"['DB migration', 'provision Aurora instance', 'review IAM roles']",[],,[] 2023-05-20T01:11:56.406549,2023-05-20T02:01:49.058939,aganivada,"{'aganivada', 'askumar'}",2,"- There are instances provisioned in the ""int"" environment that need to be checked. - Some clusters that are not being used have been stopped and are being deleted. - There is a database called ""vector-db-testing"" being used by rsrinivasan for testing.",3,"['instances provisioned in int environment', 'clusters being stopped and deleted', 'database vector-db-testing being used for testing']",[],,[] 2023-02-24T04:08:53.173869,2023-03-07T05:24:44.979809,rsrinivasan,"{'aganivada', 'rsrinivasan'}",2,- The discussion is about sharing a link to a feature flag (FF) related to global user permissions compact model targeting. - Rsrinivasan shares the link with Aganivada. - Aganivada expresses gratitude for the information.,4,"['feature flag', 'global user permissions compact model targeting']",[],,[] 2023-07-06T03:15:30.550119,2023-07-06T03:44:44.855809,ppant,{'ppant'},1,"- ppant is facing a 401 unauthorized issue while setting up a CI/CD pipeline for the vms-askai-service. - ppant added the .ci-settings.xml file to resolve the issue, but the 401 error still persists. - cc: data team is mentioned in the conversation. - ppant mentions that adding the .ci-settings.xml file fixed the 401 issue related to relative path problems. - ppant expresses gratitude to aganivada for their help in resolving the issue. - A link to a Slack conversation with more context is provided.",2,"['CI/CD pipeline setup', '401 unauthorized issue', 'relative path problems', 'gratitude']",[],,[] 2023-04-26T20:31:11.384239,2023-04-26T20:47:43.800449,rsrinivasan,"{'aganivada', 'svummidi', 'rsrinivasan'}",3,"- There is a discussion about deploying a new version of the estimation service to the stage environment. - The current version being used is 0.0.9.1, but there is a new version available, 0.0.9.2. - There is a specific fix for the time line view that needs to be included in the deployment. - Aganivada agrees to deploy version 0.0.9.2 to the stage environment.",3,"['deploying new version', 'estimation service', 'stage environment', 'version 0.0.9.2', 'specific fix', 'time line view', 'Aganivada agrees']",['version 0.0.9.1'],,[] 2023-05-15T23:29:46.384089,2023-05-15T23:55:35.212519,anair,"{'aganivada', 'anair'}",2,- The team needs to start preparing for 1-2 Okta integrations. - Documentation work for Okta has already been done. - Rsrinivasan has added articles related to testing SSO and SAML Enterprise Connection in Okta.,3,"['Okta integrations', 'Documentation work', 'Testing SSO', 'SAML Enterprise Connection']",[],,[] 2023-05-21T09:06:37.884949,2023-05-23T08:52:40.430879,rtaraniganty,"{'aganivada', 'rtaraniganty'}",2,"- rtaraniganty made changes to the latest-images script to sort the output by date and print more than one entry if needed. - The latest-images script is used to retrieve information about the latest releases of different services. - The script is used with the `--aws-profile` flag to specify the AWS profile to use and the `--excl-snapshots` flag to exclude snapshots from the output. - The output of the script includes the service name, release date, and release version. - The admin-gw builds are now named with ""SNAPSHOT"" for non-main branches to identify release builds. - The script is helpful in verifying builds without having to go to GitLab.",3,"['rtaraniganty', 'latest-images script', 'sort output by date', 'print multiple entries', 'retrieve latest releases', 'AWS profile', 'exclude snapshots', 'output format', 'admin-gw builds', 'SNAPSHOT naming', 'build verification', 'GitLab']",[],,[] 2023-02-21T02:19:26.439079,2023-02-22T02:40:23.923289,ppant,"{'aganivada', 'ppant'}",2,"- Auth0 only supports one email template per use case. - There are two possible ways to send an email based on a different template for re-invitation. - Using a different template type for re-invitation requires calling Auth0's verification API instead of the invitation API. - Updating the same template and using Auth0's common variables to categorize the email as invite or reinvite is another option. - Storing the invite type in user metadata in Auth0 and reading it in the email template can be done. - Using app_metadata instead of user_metadata is suggested, as it doesn't make it to tokens. - The cleanup of data when a user accepts the invitation needs to be managed. - Checking the Auth0 community for better ways to handle re-invitations is recommended. - Disabling the email notification from Auth0 for re-invites can be done by marking `send_invitation_email=false` in the request body to the invitation API. - However, creating the invitation link manually is not possible in this case. - It is suggested to check if there is a way to get the invitation link by calling the Auth0 API.",5,"['Different email templates for re-invitation', 'Using verification API for re-invitation', 'Updating template with common variables', 'Storing invite type in user metadata', 'Using app_metadata instead of user_metadata', 'Managing data cleanup after invitation acceptance', 'Checking Auth0 community for better ways', 'Disabling email notification for re-invites', 'Finding a way to get invitation link from Auth0 API']","['Auth0 only supports one email template per use case', 'Creating invitation link manually not possible']",NEUTRAL,[] 2023-05-05T15:47:18.929839,2023-05-05T20:32:23.310109,anair,"{'rsrinivasan', 'anair'}",2,- EA is interested in SSO (Single Sign-On) with Okta. - The team has already prepared for Okta integration. - There is a link provided for more information on the Okta integration.,2,"['SSO with Okta', 'Okta integration']",[],,[] 2023-04-19T08:18:42.888679,2023-04-19T09:25:45.302549,askumar,{'askumar'},1,"- The user ""askumar"" is conducting a test for data migration for the core-stack in INT. - The user notifies others in the channel that responses for API calls might be delayed during the testing. - The user later updates that the testing has been completed.",2,[],[],NEUTRAL,[] 2023-03-24T08:24:09.439519,2023-03-24T10:44:31.051399,ppant,"{'rtaraniganty', 'ppant'}",2,- ppant created a how-to document for common issues with Poetry. - rtaraniganty appreciates the document and expresses gratitude.,2,"['how-to document', 'common issues', 'Poetry']",[],,[] 2023-08-19T04:58:01.934289,2023-08-20T21:23:51.594089,aganivada,"{'aganivada', 'rtaraniganty'}",2,- There is a discussion about closing the CDK release branch. - The suggestion is to close the branch on Monday. - The reason for keeping it open till Monday is not mentioned. - One person agrees with keeping it open till Monday. - Another person suggests closing the branch today. - The reason for closing the branch today is not mentioned.,3,['closing CDK release branch on Monday'],['closing CDK release branch today'],,[] 2023-07-10T07:11:22.698019,2023-07-10T07:12:37.330429,sranjan,"{'pjha', 'sranjan', 'askumar'}",3,- sranjan is having trouble accessing AWS. - pjha suggests checking if sranjan is connected to VPN. - askumar advises trying to access AWS with VPN. - sranjan thanks pjha and askumar for their input.,4,['AWS access troubleshooting'],[],,[] 2023-04-26T00:52:07.270939,2023-04-26T20:34:55.243259,bganganna,"{'svummidi', 'bganganna'}",2,"- The discussion is about documenting the steps for reprocessing open text comments in production to populate the date. - Svummidi mentions that they have a draft for the documentation and will update and publish it. - Bganganna asks if there is a release recipe document that should include this step as a post-deployment requirement. - Svummidi explains that the usual pattern is to cover the feature with a feature flag, update tenant-specific comments, enable the feature flag, update all tenant comments, and then enable the flag for all tenants. - It is mentioned that this activity should be done as a post-release activity, not as part of the release recipe.",5,"['documenting steps', 'reprocessing open text comments', 'populate the date', 'draft for documentation', 'update and publish', 'usual pattern', 'feature flag', 'update tenant-specific comments', 'enable feature flag', 'update all tenant comments', 'enable flag for all tenants', 'post-release activity']",['release recipe document'],NEUTRAL,[] 2023-04-05T04:45:20.610419,2023-04-12T08:38:37.411319,araman,"{'aganivada', 'anair', 'araman', 'rtaraniganty'}",4,"- There is a need to clean up old entries/records that are not categorized into sections in production. - The ""Create And Verify Query AuditLog"" test failed in production due to deploying lambdas from the develop branch. - It is suggested to release the lambda code during or after the branch-cut to ensure a working version in the main branch and also tagged. - It is recommended to include branch names in the package names of lambdas to avoid overwriting what is in the main branch. - The issue with empty section records can be taken up and resolved. - The system_user issue needs to be fixed. - There is a discussion about sharing credentials to see the records.",18,"['clean up old entries/records', 'release lambda code', 'resolve issue with empty section records', 'fix system_user issue']","['Create And Verify Query AuditLog test failed', 'deploying lambdas from develop branch', 'overwriting main branch', 'discussion about sharing credentials']",,"['PLAT-1887', 'PLAT-1887']" 2023-07-31T22:46:02.739819,2023-08-01T00:21:28.700779,pjha,"{'aganivada', 'pjha', 'askumar'}",3,"- The RDS Proxy reader endpoint is connecting to both the underlying reader replica of the RDS cluster. - The activity on the reader endpoint of PROXY is being checked. - There is no activity for the query on the RDS writer endpoint. - The metrics pattern looks the same for both the reader instances. - It is possible to force the reader instance from property/env-var if the proxy is enabled. - The environment variables can be accessed to decide whether to populate the proxy or the actual details using the cluster name. - Routing other services to the proxy can be done with Prashant's CDK changes. - The pulse manager is currently the most active service in int. - The discussion on adding the proxy to services will be held during planning. - A merge request will be submitted for the changes made by pjha, and the decision to merge will be based on any chances for regression. - The decision to enable the proxy can be made by service and environment. - The option to connect to the proxy or not has been added in the cdk.json file.",20,"['RDS Proxy', 'reader endpoint', 'activity checking', 'metrics pattern', 'force reader instance', 'environment variables', 'routing services', 'pulse manager', 'merge request', 'enable proxy', 'connect option']","['no activity on writer endpoint', 'discussion on adding proxy', 'chances for regression']",,[] 2023-05-04T22:16:17.157669,2023-05-05T00:25:01.872059,ppant,"{'aganivada', 'rtaraniganty', 'ppant'}",3,"- The team has made changes in the openApi generated ApiClient to support reading endpoint configurations for backend support for multiple instance deployment. - Currently, the endpoint configurations are being done via application.yml, but they can be moved to SSM to avoid updating the yml file of each service. - The endpoint configurations are based on tenant IDs, and requests will be routed accordingly. - If a service name or tenant ID is not present in the configuration, the request will go to the default endpoint of the environment. - The team is validating the flow with reactive app as well. - The port value cannot go beyond 65535, but for the PoC, random values were used for the port and tenant IDs. - The team has tested the flow with the onboarding app by explicitly setting the tenant IDs in the thread context, and it seems to work fine.",7,"['changes in openApi generated ApiClient', 'support for reading endpoint configurations', 'backend support for multiple instance deployment', 'moving endpoint configurations to SSM', 'routing requests based on tenant IDs', 'validating flow with reactive app', 'testing flow with onboarding app']","['port value exceeding 65535', 'using random values for port and tenant IDs']",POSITIVE,[] 2023-04-02T20:53:07.220499,2023-04-03T21:12:30.367229,aganivada,"{'aganivada', 'rsrinivasan', 'rtaraniganty'}",3,- The discussion is about the Haleon SSO setup and the use of SignatureMethod and DigestMethod in the SAML-response. - It is mentioned that the configuration picks up the default values for these methods. - There is a concern about the algorithm chosen for calculating the digest and whether it is causing issues. - Auth0 logs indicate that there is an invalid thumbprint for the certificate. - It is suggested that this could be a configuration issue and further validation is needed with the customer. - A link to the Auth0 error log is provided for reference. - Seshan mentions that the issue may be related to the customer's configuration and suggests reaching out to them for clarification.,5,"['Haleon SSO setup', 'Default values for SignatureMethod and DigestMethod', 'Configuration issue']","['Invalid thumbprint for certificate', 'Algorithm for calculating digest causing issues', ""Customer's configuration issue""]",,[] 2023-05-02T22:06:33.068739,2023-05-03T02:55:19.435209,rtaraniganty,"{'aganivada', 'rtaraniganty'}",2,- Today is araman's 1-year anniversary at Axiamatic. - Anu is being congratulated for this milestone. - The team members rtaraniganty and aganivada are expressing their congratulations.,2,"['anniversary', 'congratulations']",[],,[] 2023-04-09T23:34:21.234639,2023-04-10T00:06:49.182019,ppant,"{'aganivada', 'ppant'}",2,- A document was created in January for refactoring Tenancy. - The document will be used to keep track of all the refactoring related tasks and notes. - The refactoring will follow a core-data-esque delegate-helper design pattern. - There are a couple of delegate methods that are called directly from other services. - Aganivada will take up the refactoring of usersapiservicedelegate and VendorApiServiceDelegate.,2,"['refactoring Tenancy', 'core-data-esque delegate-helper design pattern', 'delegate methods', 'Aganivada']",[],,[] 2023-03-16T14:29:07.216069,2023-03-16T20:42:16.892079,rtaraniganty,"{'aganivada', 'pmangalapuri', 'rtaraniganty'}",3,"- The team is discussing the order of priorities for different tasks, specifically regarding the automatic deployment of builds and the ability to deploy multiple instances of a service during development. - It is suggested that the team should focus on completing the ""add ability to deploy multiple instances of a service during development"" task before considering the automatic deployment of builds. - The reason for this suggestion is that having support for multi-instance deployment would prevent the automatic deployment from breaking someone else's test. - There is a critical requirement to enable apps to communicate with multiple backend instances using a header. - The team is considering adding the header from the UI code based on a local storage item. - It is mentioned that configuring the backend domain can be challenging, and the team is exploring the possibility of using an ALB (Application Load Balancer) to route requests based on the header. - The team plans to manually test if it is possible to route requests to specific instances based on the header using ALB. - The team is interested in trying out this approach and seeing if it can be implemented using the CDK (Cloud Development Kit).",6,"['ability to deploy multiple instances of a service during development', 'support for multi-instance deployment', 'enabling apps to communicate with multiple backend instances', 'adding header from UI code based on local storage item', 'exploring the possibility of using an ALB (Application Load Balancer) to route requests based on the header', 'manually testing routing requests to specific instances based on the header using ALB', 'implementing using the CDK (Cloud Development Kit)']","['automatic deployment of builds', ""breaking someone else's test"", 'challenging configuration of backend domain']",,['PLAT-1589'] 2023-09-13T03:34:55.005059,2023-09-13T10:14:52.792219,aganivada,"{'aganivada', 'bkathi', 'rtaraniganty'}",3,"- aganivada is experiencing an error when using the /axmd-oc command. - bkathi suggests invoking the endpoint via curl to troubleshoot the issue. - bkathi discovers that the whitelist is not working, resulting in an ""Unauthorized"" response. - bkathi shares a curl command to test the endpoint. - rtaraniganty suggests redeploying from the release branch. - aganivada confirms that the slack commands are now working. - bkathi mentions that there is a message with ""null"" and plans to replace it with a more appropriate response.",9,['slack commands are working'],"['error with /axmd-oc command', 'whitelist not working', 'Unauthorized response', 'null message']",,[] 2023-05-18T00:15:32.359469,2023-05-18T01:30:57.301169,aganivada,"{'aganivada', 'pjha', 'bganganna'}",3,"- There is an issue with GL-runner where authentication redirects are not working for the production environment, but they work fine for other environments. - The team is investigating the issue and planning to check it with the GL-runner. - The issue seems to be related to bot-protection enabled in the production environment, causing Auth0 to think that the calls from GL-runner are coming from a bot. - The team is considering whitelisting the IP on Auth0 to resolve the issue. - Once the issue is resolved, a sanity test will be triggered on the production environment. - Auth0 was prompting captcha to the headless browser, causing issues. - The team is aware of the issue and will continue to monitor it.",11,"['investigating issue', 'planning to check with GL-runner', 'considering whitelisting IP on Auth0', 'sanity test on production environment', 'team aware and monitoring']","['issue with GL-runner authentication redirects', 'bot-protection causing Auth0 issues', 'Auth0 prompting captcha to headless browser']",NEGATIVE,[] 2023-05-08T17:11:48.496209,2023-05-09T09:04:02.565019,rtaraniganty,"{'rtaraniganty', 'ppant'}",2,"- The user is encountering a FileNotFoundError when trying to access the 'axm_services.json' file in the 'platform_libs_epoch/configuration' directory. - The user is also experiencing an issue with the 'axm_http' module, specifically when the response is not a 200 OK status code. - The user suggests moving away from the JSON approach and instead using a Python dictionary to make the calls from any client of the library. - The user mentions that the JSON file should ideally be incorporated into the package, but they are open to making it a Python dictionary. - The user asks if the changes can be made to work from any lambda. - The user provides a link to the temporary merge request where the changes have been made and states that it seems to be working fine.",17,"['Working with JSON files', 'Using Python dictionaries', 'Incorporating JSON file into package', 'Changes working from any lambda']","['FileNotFoundError', ""Issue with 'axm_http' module""]",POSITIVE,[] 2023-08-15T22:28:54.970809,2023-08-15T23:07:31.566859,aganivada,"{'aganivada', 'ppant'}",2,- The discussion is about adding a default with error response in a method. - The default invitation type as specified in the API specs is always INVITE. - The default block won't be executed because the default invitation type is always INVITE. - The decision to add the default block is to be on the safe side. - The control of the default invitation type is at the API level.,4,"['adding default with error response', 'default invitation type', 'API specs', 'safe side']","['default block not executed', 'control of default invitation type']",NEUTRAL,[] 2023-02-10T13:30:50.127899,2023-02-13T07:05:29.246579,akalyta,"{'akalyta', 'svummidi', 'aganivada'}",3,"- The person is trying to add a secret key to CDK and ensure it is passed to the application on initialization. - They have stored the secret key in secrets manager and are trying to retrieve and pass it to the application. - They have checked the `base-infra/bootstrap/setup-basics.sh` script and found existing functions for other services, but the secret key is not being retrieved and passed to the application. - They have added the ""OPENAI_KEY"" string to `env-vars` in `estimation-service/cdk.json` to fix the issue. - They are unsure about pushing changes for `base-infra`. - They found the `base-infra/bootstrap/setup-functions.sh` script that records secrets, but it is unclear if it needs to be run manually or if they are looking in the wrong place. - They are suggested to make changes in the estimation service scripts to ensure the secret key is passed correctly. - They are considering making a merge request (MR) for base infra to test if adding the ""OPENAI_KEY"" worked. - The setup-functions script has already been executed in INT. - They are advised to execute their changes to the service and if it works, submit an MR. - They have encountered an issue where the setup-functions script says the OpenAI key already exists, assuming it was provided by someone else. - They have added the OPENAI_KEY to the estimation cdk.json file and need to push the change to see if it works. - They have resolved the issue with the help of aganivada.",9,"['Adding secret key to CDK', 'Storing secret key in secrets manager', 'Retrieving secret key from secrets manager', 'Adding OPENAI_KEY to env-vars in cdk.json', 'Making changes to estimation service scripts', 'Executing changes to the service', 'Resolving the issue']","['Uncertainty about pushing changes for base-infra', 'Unclear about setup-functions script', 'Encountering issue with setup-functions script', 'OpenAI key already exists', 'Pushing changes to estimation cdk.json']",POSITIVE,[] 2023-07-21T16:54:14.559019,2023-07-24T10:46:53.446309,svummidi,"{'svummidi', 'rsrinivasan'}",2,"- Akalyta faced an issue with deserializing one of the core data responses for UserAttributesDTO. - The generated code for metadata is failing if the value is a string. - It is expected to generate the metadata type as Optional[Dict[str, Any]] instead of Optional[Dict[str, Dict[str, Any]]]. - In Java, the generated code for metadata is using Map. - There are two options to resolve the issue: modify the core openapi json or modify the copy of json as a stopgap solution. - The purpose of the flag allows non-dictionary type objects like strings as values. - It is suggested to change the core-data spec if the purpose of the flag is to allow non-dictionary type objects.",5,"['deserialization issue', 'generated code', 'metadata type', 'core openapi json', 'copy of json', 'flag purpose']",['core data spec'],NEUTRAL,[] 2023-06-13T23:31:44.538569,2023-06-14T21:21:14.930579,pkarthikeyan,"{'pkarthikeyan', 'hchintamreddy', 'rtaraniganty', 'ppant'}",4,"- There have been occurrences of JDBC connection errors in the token-manager. - The Spring health endpoint has also shown warnings about health checks taking longer than the threshold. - Similar observations have been made in Lightstep and score-provider. - One possible cause of the issue is the connection pool being maxed out, leading to longer query execution times and holding onto threads. - Increasing the max limit of the connection pool size could be a potential solution. - It is unclear why a new connection was being set up despite no spike in used connections in the Hikari connection pool metrics. - There was a similar occurrence in production last week, but the logs were lost due to the retention limit. - It is possible that the issue is a transient network issue or a hiccup in RDS. - Enabling debug logging for Hikari and tracking network failures in Fargate could provide further information.",12,"['Increasing max limit of connection pool size', 'Enabling debug logging for Hikari', 'Tracking network failures in Fargate']","['JDBC connection errors in token-manager', 'Warnings about health checks taking longer', 'Observations in Lightstep and score-provider', 'Connection pool being maxed out', 'New connection being set up despite no spike in used connections', 'Logs lost due to retention limit', 'Transient network issue or hiccup in RDS']",NEGATIVE,[] 2023-06-15T18:44:51.330729,2023-06-15T22:06:55.841289,aganivada,"{'aganivada', 'pjha'}",2,- There is a concern about jobs getting stuck for a certain amount of time. - The possibility of automatically switching the pipeline to a shared runner if a job is stuck is being discussed. - There is uncertainty about whether it is possible to switch pipelines during a running pipeline. - The need for documentation on reverting changes to allow builds to run back on shared runner is mentioned. - A link to documentation on enabling/disabling GitLab Group for a specific project is provided.,5,"['possibility of automatically switching pipeline', 'documentation on reverting changes']","['jobs getting stuck', 'uncertainty about switching pipelines', 'need for documentation on enabling/disabling GitLab Group']",,[] 2023-07-12T21:15:28.497939,2023-07-12T22:43:25.587249,pjha,"{'aganivada', 'pjha'}",2,- The team is discussing a postgres engine version upgrade. - Strategy 1 involves performing an upgrade dry run. - There will be no downtime until step 9 of the upgrade process. - The team can test 3 production DB clusters for any inconsistencies. - The team is aware that they can perform tests on the production DB without downtime.,4,"['postgres engine version upgrade', 'upgrade dry run', 'no downtime', 'test production DB clusters', 'perform tests without downtime']",[],,[] 2023-04-12T08:58:26.697039,2023-04-12T10:07:15.128419,aganivada,"{'aganivada', 'ppant'}",2,"- The discussion is about an issue with Spring ignoring transactional annotations. - One user, ppant, faced a similar issue with tenancy's tenant rollback due to a duplicate name. - Aganivada suggests splitting the code into a helper class and using annotations on the helper. - Ppant agrees that this solution should work but mentions running into a major refactoring issue in the past. - Aganivada mentions asking askumar for further input.",5,"['Spring ignoring transactional annotations', 'splitting code into a helper class', 'using annotations on the helper']","['duplicate name', 'major refactoring issue']",,[] 2023-03-14T00:04:22.829639,2023-03-14T02:13:06.844349,ppant,"{'aganivada', 'rsrinivasan', 'ppant'}",3,"- The discussion is about lambda integrations with backend services in VPC. - Option 2 is considered better due to changes and extra cost involved in routing requests through private API Gateway. - Microservices interact with each other through a private domain, not via API Gateway. - The format for the private domain is "".services..exp-pinstripe.local"". - Option 2 makes sense based on the discussion.",8,"['Option 2', 'private domain']","['extra cost', 'routing requests through private API Gateway']",,[] 2023-07-26T02:51:55.404429,2023-07-26T03:04:05.973949,aganivada,"{'aganivada', 'ppant'}",2,- The person named aganivada is asking ppant about the status of multi-instance by tenant id being enabled in all apps in int. - Ppant responds that only tenancy and core apps have the feature enabled currently. - Ppant mentions that they might have changes stashed for all Axiamatic services and it will take them a maximum of 2 hours to make the changes for all services. - Aganivada suggests holding off on making the changes until the branch cut is done to avoid any regression at this point. - Ppant agrees to hold off on making the changes for now.,5,[],[],NEUTRAL,[] 2023-04-04T06:35:25.207649,2023-04-04T07:34:34.588889,rsrinivasan,"{'aganivada', 'bganganna', 'rsrinivasan'}",3,"- rsrinivasan enabled the soft delete feature flag for tenants 8677, 12944, and 14037. - rsrinivasan was able to verify that the EA was loading fine and tested it on a test tenant. - There was a 404 error on the pulse manager, but it was not related to the feature flag on scoring. - The error was that no pulses were found for a specific product instance. - bganganna asked if rsrinivasan enabled the flag or excluded some instances as well. - rsrinivasan mentioned that for tenant 8677, some pulse instances were excluded: 6394, 8669, and 8668.",7,"['soft delete feature flag enabled', 'EA loading fine', 'tested on test tenant']","['404 error on pulse manager', 'no pulses found for specific product instance', 'excluded pulse instances for tenant 8677']",,[] 2023-06-26T23:16:40.854389,2023-06-26T23:44:03.343369,pjha,"{'aganivada', 'pjha', 'ppant'}",3,"- There is a concern about the SSM mapping still existing after the stack is deleted, which can cause issues from the backend. - It is suggested to delete the service-information from the SSM parameter instead of deleting the entire SSM parameter. - The previous value of the tenant's SSM parameter can be retrieved from Slack messages and restored if needed. - There is a discussion about adding a TTL date in the SSM parameter through CDK, but it is decided not to do so because it would require additional checks from the backend. - It is suggested to cleanup/update the SSM parameter via a lambda function during cleanup.",6,"['Deleting service-information from SSM parameter', ""Retrieving previous value of tenant's SSM parameter"", 'Cleanup/update SSM parameter via lambda function']","['SSM mapping still existing after stack deletion', 'Adding TTL date in SSM parameter through CDK']",NEUTRAL,[] 2023-08-29T05:41:55.384959,2023-08-29T06:13:59.037619,aganivada,"{'aganivada', 'askumar'}",2,- Aganivada is asking Askumar if they can close the tenancy HF branch. - Askumar gives permission to Aganivada to close the branch. - Aganivada informs Askumar about merge conflicts while merging with the develop branch. - Aganivada asks Askumar to cross-check if all of their changes are present. - Askumar thanks Aganivada and confirms that their changes are present in the merge commit.,4,"['closing tenancy HF branch', 'permission to close branch', 'changes present in merge commit']",['merge conflicts'],,[] 2023-03-20T07:18:58.985919,2023-03-20T23:04:05.498869,aganivada,"{'aganivada', 'askumar'}",2,"- The discussion is about trying to use spring override for transaction manager. - askumar has not found a working approach for overriding the context yet. - askumar is using qc (possibly referring to Quality Center, a test management tool).",4,[],[],NEGATIVE,[] 2023-02-06T01:49:05.028849,2023-02-07T03:37:04.304049,aganivada,"{'aganivada', 'anair', 'bganganna', 'rsrinivasan'}",4,"- There is an issue where customers have MFA enabled but vendors do not, causing vendor users to be unable to access customer product details pages. - A change was made in the auth0 rules to skip MFA check for vendor users with the ""obfuscated:read"" role in the customer org. - This issue is currently blocking Splunk from accessing EA's product details page since EA enabled MFA. - The behavior for MFA policy depends on the user's primary organization. - Once SSO is enabled, MFA settings will be handled by the IDP instead of auth0. - Testing has been done for certain use cases, such as when customer MFA is enabled and vendor MFA is not enabled. - Collaborators need to enable MFA and have ""mfa_enabled:true"" set in their user metadata to access the dashboard. - A fix for the issue has been implemented in production and verified. - If any issues related to MFA arise with the new change, the old rule can be disabled and re-enabled. - There is a suggestion to make a similar change to allow access to system_user without MFA.",18,"[""Change in auth0 rules to skip MFA check for vendor users with 'obfuscated:read' role"", 'Fix for the MFA issue implemented and verified', 'Testing for certain use cases', 'Suggestion to allow access to system_user without MFA']","['Issue with customers having MFA enabled but vendors not', ""Blocking Splunk from accessing EA's product details page"", ""Dependence of MFA policy on user's primary organization"", ""Collaborators needing to enable MFA and have 'mfa_enabled:true' set in user metadata"", 'Possibility of issues arising with the new change']",MIXED,"['PLAT-1656', 'PLAT-1656', 'PLAT-1656']" 2023-07-26T08:58:50.221609,2023-07-27T12:19:30.688529,aganivada,"{'aganivada', 'rtaraniganty'}",2,"- There is an issue with the GitLab runner instance, as builds are taking longer than usual. - The issue was raised by aganivada and cc'd to rsrinivasan. - rtaraniganty asks if the issue has been resolved. - aganivada reminds pjha to check the GitLab runner instance. - rtaraniganty asks pjha to close the loop once the issue is resolved.",4,[],"['issue with GitLab runner instance', 'builds taking longer than usual']",NEGATIVE,[] 2023-05-31T23:03:37.062969,2023-06-01T20:16:59.821809,rsrinivasan,"{'aganivada', 'rsrinivasan', 'rtaraniganty'}",3,"- The alert ""[Int] [Global] ERROR logs > 100 in 15mins"" on account Axiamatic is causing performance issues in the cluster. - The alert is using a heavy search query, so it has been disabled to prevent further performance issues. - The suggestion is to optimize the search query or create multiple alerts to improve performance. - There have been issues with logs filling up quickly in the INT environment. - Some debug logs were enabled last week for testing and were not reverted, causing a capacity issue. - The logging stopped for main instances after updating the log level, but there were still some multi-instances where the log config didn't reflect. - Additional capacity was added to INT for debugging purposes. - The team will investigate and address the capacity and logging issues.",11,"['performance issues', 'optimize search query', 'create multiple alerts', 'additional capacity added', 'investigate and address issues']","['heavy search query', 'logs filling up quickly', 'debug logs not reverted', 'log config not reflecting']",NEGATIVE,[] 2023-07-13T03:34:46.577699,2023-07-13T22:54:45.851459,akasim,"{'aganivada', 'pjha', 'rtaraniganty', 'akasim', 'askumar'}",5,"- Akasim encountered an error when trying to deploy the scoring_service in INT. - The error was related to a lack of authorization to perform the ""rds:DescribeDBClusters"" operation on the RDS cluster. - Akasim was on VPN when the error occurred. - Askumar suggested that the permission may not have been enabled for the deployment to call RDS to describe the cluster. - It was mentioned that the permission should be given at the group level. - The deployment was attempted by Askumar and it passed the describe step without any errors. - The image being deployed was not yet published, and Akasim was following up on it. - Anz needed permission to successfully deploy the image. - Anz was already part of the powerusergroup, but it is unclear why the permission issue occurred. - Pjha added RDS-Full access permission for testing purposes, and it worked. - Pjha later removed the permission. - Anz received the permission explicitly, not through the powerusergroup. - It was questioned why the deployment was succeeding for other services without the need for additional permissions. - A QC (Quality Check) was planned between Pjha and Aganivada.",24,"['successful deployment', 'permission issue resolved', 'RDS-Full access permission added', 'QC planned']","['error during deployment', 'lack of authorization', 'permission issue']",MIXED,[] 2023-05-09T22:35:58.568249,2023-05-09T22:37:10.969749,aganivada,"{'aganivada', 'rsrinivasan', 'askumar'}",3,- aganivada is suffering from a stye and is trying to reduce screen-time. - aganivada may be less responsive on Slack due to their condition. - rsrinivasan wishes aganivada to take care. - askumar advises aganivada to avoid long stretches.,3,[],"['stye', 'screen-time']",UNKNOWN,[] 2023-05-15T22:36:22.904919,2023-05-15T22:37:50.359689,ppant,"{'aganivada', 'ppant'}",2,- The person named ppant is redeploying the lambda from CDK V2 to fix an issue. - Aganivada suggests that it should already be CDKv2.,2,"['redeploying lambda', 'fixing issue']",['CDKv2'],,[] 2023-02-20T01:23:54.950599,2023-02-20T01:43:33.882889,askumar,"{'aganivada', 'askumar', 'ppant'}",3,- askumar is requesting a review of an approach to update/fetch the stage of epoch execution. - ppant is unable to open the link provided by askumar. - askumar adds cc rsrinivasan and grants access to ppant. - aganivada suggests sharing the information in the design channel and tagging tls for review. - askumar agrees and shares the information in the design channel. - aganivada adds some comments to the discussion.,7,"['review approach', 'grant access', 'share information']","['unable to open link', 'add comments']",,[] 2023-03-21T05:15:16.716529,2023-03-23T01:19:11.287689,pjha,"{'aganivada', 'pjha'}",2,- The discussion is about a confluence design document for deploying multiple instances of the same service. - Pjha shares the link to the design doc with the team. - Aganivada suggests posting the design doc in the design-review channel. - Pjha agrees to post it in the design-review channel.,3,"['confluence design document', 'deploying multiple instances', 'same service']",[],,[] 2023-06-08T11:34:15.904479,2023-06-08T21:05:09.149369,rsrinivasan,"{'aganivada', 'gshenoy', 'rsrinivasan', 'rtaraniganty'}",4,"- The fields under ""mdc"" in the integration account are getting dropped and not indexed in Logz. - The field ""mdc.tenantId"" is mapped as a long but was sent with a null value, causing the issue. - The errors are related to the new feature ""UserAttributeDiscoveryPublisher.java"" in the discovery service. - The delegate had the annotation ""EnableMDCPublicMethods"" but the ""tenantId"" field is optional, so the annotation was removed. - Logz has added a rule to map the ""tenantId"" field to ""tenantId_str"" if it comes as a string and to ""tenantId"" if it comes as an int. - However, even if it is a valid number but comes as a string, it is still mapped to the ""_str"" field. - This may break some filters and alerts, so an alert can be set to indicate when the ""_str"" field is present. - Once the fixes from the discovery team are deployed, the rule can be reverted. - A bug will be added to fix the null check in the common lib for mdc.",17,"['Fixing field mapping issue in Logz', 'Fixing null value issue in mdc.tenantId', 'Fixing errors in UserAttributeDiscoveryPublisher.java', 'Removing unnecessary annotation', 'Adding rule in Logz for field mapping', 'Setting alert for presence of _str field', 'Deploying fixes from discovery team', 'Adding bug to fix null check in common lib for mdc']",[],,['PLAT-2173'] 2023-04-18T21:08:54.429929,2023-04-24T19:35:37.164139,aganivada,"{'aganivada', 'anair', 'rvaidya'}",3,"- The user roles of ""Product Admin"" and ""Account Admin"" need to be renamed according to the specified links. - The proposed new name for ""Product Admin"" is ""Vendor Admin"". - There is a discussion about whether to differentiate between ""Product Admin"" and ""Account Admin"" or keep their privileges similar. - A migration script will be needed to update existing product/account admins to the new ""Local Admin"" role. - Open invitations with the old roles will need to be re-invited as ""Local Admin"". - Once the migration script is run, the role changes cannot be reverted.",3,"['Rename user roles', 'Differentiate between Product Admin and Account Admin', 'Migration script for role update', 'Re-invite open invitations', 'Irreversibility of role changes']",[],,['PLAT-1876'] 2023-03-01T06:45:12.380929,2023-03-02T08:59:23.911449,rsrinivasan,"{'aganivada', 'svummidi', 'rsrinivasan', 'rtaraniganty'}",4,"- Rsrinivasan has generated an excel file containing topics extracted from open text comments for AWS Comprehend, OpenAI Curie, and OpenAI Davinci. - Curie seems to have added its own interpretation to the term ""on-call"" from the medical domain. - Rsrinivasan shared the link to the excel file: - Aganivada observed that AWS topics are empty in many cases and the average topics don't convey important information. Curie also seems to extract the entire string in some cases when there are bullet points. - Rsrinivasan mentioned that OpenAI has released gpt-3.5-turbo, which costs the same as Curie and 1/10th the price of Davinci. The keyword extraction with the new model is looking better, and sentiment calculation now includes a question for better context. - Svummidi mentioned the need to work with Mnirmal to include questions in the pipeline for better context. - Rsrinivasan shared a script repository for prompts and mentioned that he will check with Mnirmal on questions. - Svummidi pointed out that there is no concept of session in the current setup and suggested tracking the tokens consumed count for better comparison between models and prompts. - Rsrinivasan agreed to update the script to record tokens for comparison.",12,"['Excel file with topics extracted from open text comments', 'OpenAI released gpt-3.5-turbo', 'Keyword extraction with gpt-3.5-turbo', 'Sentiment calculation improvement', 'Script repository for prompts', 'Tracking tokens consumed count']","['AWS topics are empty', ""Average topics don't convey important information"", ""Curie's interpretation of 'on-call' from medical domain"", 'Curie extracts entire string in some cases with bullet points', 'No concept of session in current setup']",NEUTRAL,[] 2023-05-29T21:13:15.607239,2023-05-29T22:54:53.342369,rtaraniganty,"{'aganivada', 'pjha', 'rtaraniganty'}",3,"- rtaraniganty is adding default permissions to ECS tasks to add SSM params. - pjha approves of rtaraniganty's changes. - aganivada suggests adding an additional prefix to the SSM parameters for easier tracking. - rtaraniganty disagrees and believes it is fine to let all services in an environment have read access to parameters in that environment. - rtaraniganty mentions that they have been setting SSM parameters in the environment during provisioning, but it is also useful to let the service read what it needs directly. - aganivada mentions that there may be cases where explicit permissions were added for reading SSM parameters from code.",9,"['default permissions', 'approving changes']","['disagreement on prefix', 'explicit permissions']",,[] 2023-05-22T02:02:23.559989,2023-05-22T02:44:16.935749,sranjan,"{'aganivada', 'rsrinivasan', 'sranjan'}",3,"- The team is experiencing an issue with parsing a specific rule in the Jira container, resulting in a log message stating ""Failed to parse given rule 17207::c::17207:pin:21772::rw . so skipping the rule"" in *STAGE*. - The team suspects that the issue may be related to an outdated version of the platform libs common (axm-commons.version 0.0.10) in the pom file. - It is suggested to update the platform libs common to the latest version. - The error may be harmless unless the team is using rbac in the Jira service. - The flooding of logs due to this error prompted the team to raise the issue. - Enabling a flag to use a new permission model may be the cause of this issue, but it should only happen for UI calls. - A tracker ticket has been created to address the issue.",11,"['parsing issue', 'updating platform libs common', 'raising the issue', 'creating tracker ticket']","['outdated version of platform libs common', 'flood of logs', 'possible cause: enabling flag for new permission model']",NEGATIVE,[] 2023-01-31T20:09:01.007139,2023-02-01T04:32:56.443719,aganivada,"{'aganivada', 'askumar', 'rvaidya'}",3,"- There are several bugs that need to be addressed in version 0.9.7. - PLAT-1642 is a bug related to permissions not being verified properly for the updateProductInstance function. - PLAT-1641 is a bug related to permissions not being set during the onboarding flow. - PLAT-1640 is a bug related to an error message that needs to be updated. - PLAT-1638 is a bug that requires more information, as the user was prompted for MFA during login. - PLAT-1636 and PLAT-1637 are bugs that require both UI and backend changes. - PLAT-1641 has been closed after discussion. - The update API in PLAT-1642 has a condition that limits the update only for orgAdmin users. - PLAT-1642 is not a high priority bug and won't be hotfixed, but it will be fixed in the develop branch. - The fix for PLAT-1642 involves changing the condition from AND to OR. - The same OR condition is present in the deleteProduct function as well.",7,"['bug fixes', 'permissions verification', 'onboarding flow', 'error message update', 'MFA during login', 'UI changes', 'backend changes', 'PLAT-1641 closed', 'update API', 'orgAdmin users', 'fix for PLAT-1642', 'deleteProduct function']","['bugs', 'PLAT-1642 not high priority', 'not hotfixed', 'condition change from AND to OR']",NEGATIVE,"['PLAT-1642', 'PLAT-1641', 'PLAT-1640', 'PLAT-1638', 'PLAT-1638', 'PLAT-1636', 'PLAT-1641', 'PLAT-1642', 'PLAT-1642']" 2023-04-24T04:36:26.585829,2023-04-24T21:28:01.498779,aganivada,"{'aganivada', 'rtaraniganty'}",2,"- The team is discussing migrating their current RDS instances to Aurora Serverless v2 in integration. - The current setup consists of 3 RDS instances, and the team estimates the cost to be around USD 81 per instance type. - With Aurora Serverless v2, the estimated cost would be USD 131 with a proxy and USD 43 without a proxy. - The team is considering two options: migrating all 3 RDS instances to 2 Aurora Serverless instances or migrating only the core and discovery instances to Aurora. - Option 2 might reduce costs and downtime but could introduce inconsistencies with some services using Aurora and others using regular PostgreSQL. - The team agrees that using Aurora for all instances would mirror what they are doing in production. - They do not plan to use a proxy in production but might try it in integration first and introduce it in production if it works well. - The same configuration (Aurora for all instances) will be used in the staging environment as well.",6,"['migrating to Aurora Serverless v2', 'reducing costs', 'reducing downtime', 'using Aurora for all instances', 'using a proxy in integration']","['introducing inconsistencies', 'using regular PostgreSQL', 'using a proxy in production']",UNKNOWN,[] 2023-03-19T23:03:24.788369,2023-03-23T23:12:15.153689,rvaidya,"{'aganivada', 'rvaidya', 'ppant', 'rsrinivasan', 'svummidi', 'snangia'}",6,"- There are excessive logs being generated in the INT environment. - The trace logs are enabled for tenantid 19190 and need to be disabled. - The log configuration files for the scheduler service in both INT and stage environments do not mention any specific tenants. - The ingestion service is producing a lot of debug logs. - The sentiment collector has the maximum number of logs in the INT environment. - The log level for the ingestion service needs to be reverted to info. - There is an issue with the auto refresh of log level changes via cloud config. - There was a similar issue with core data in the past, which is a known bug with Log4j2. - The SC (sentiment collector) logging level needs to be updated back to info and the service may need to be bounced in the INT environment.",20,"['log configuration files', 'ingestion service', 'sentiment collector', 'auto refresh of log level changes']","['excessive logs', 'trace logs enabled for tenantid 19190', 'debug logs in ingestion service', 'maximum number of logs in sentiment collector', 'log level for ingestion service', 'issue with auto refresh of log level changes', 'known bug with Log4j2', 'SC logging level needs to be updated', 'service bounce in INT environment']",NEGATIVE,[] 2023-08-24T22:54:23.174719,2023-08-24T23:09:11.976179,aganivada,"{'aganivada', 'pjha'}",2,"- The maximum ACU in the highlighted instances is 5 and 3, which should be sufficient. - The minimum ACU during the first upgrade attempt was 0.5 and the maximum ACU was 1. - The available memory fluctuates between 1 GiB and 2 GiB based on the scaling of ACUs on the cluster. - Increasing the minimum ACU is suggested to be on the safer side.",3,"['maximum ACU', 'sufficient', 'available memory', 'scaling of ACUs', 'increasing minimum ACU', 'safer side']","['minimum ACU', 'upgrade attempt']",,[] 2023-03-15T00:28:14.433639,2023-03-15T00:50:16.185109,rtaraniganty,"{'rtaraniganty', 'sbhosale'}",2,"- There was an exception encountered during context initialization related to a bean creation error. - The error was caused by a validation failure in the Liquibase configuration. - The specific change set that failed validation was related to the persona table. - The issue was caused by accidentally including the unique constraint as part of another merge request. - The suggested solution was to drop the persona table and remove the changelog, or add a different change set to modify the table. - The issue was resolved by updating the persona table to match the changeset in the develop branch. - It was agreed that another changeset would be added if needed to further modify the table.",7,"['exception encountered', 'context initialization', 'bean creation error', 'validation failure', 'Liquibase configuration', 'specific change set', 'persona table', 'suggested solution', 'issue resolved', 'develop branch', 'another changeset']","['accidental inclusion', 'unique constraint']",POSITIVE,"['DISC-2609', 'DISC-2609']" 2023-05-03T12:40:21.777179,2023-05-04T04:55:49.855949,rtaraniganty,"{'rsrinivasan', 'rtaraniganty'}",2,- The current platform sprint is the 100th platform sprint. - The team is excited about this milestone.,2,[],[],,[] 2023-07-14T12:18:35.020609,2023-07-16T23:19:59.588679,rtaraniganty,"{'rsrinivasan', 'rtaraniganty', 'rvaidya', 'sfarooqh'}",4,"- The notification service uses SQS for sending emails to multiple recipients in bulk. - The use of @Async in the REST -> SQS transition allows for faster response times, but there is a risk of message loss if there is a node crash before the messages are posted to SQS. - There is a tracking table in the Postgres database that maintains the status of each email (created, published, failed) before it is posted to SQS. - There is a backlog for cleaning up old records in the tracking table. - The tracking table is in append mode and does not update the state of the emails. - There is a plan to add additional states to the tracking table, such as ""SENT"" and ""Email Clicked"", by subscribing to AWS SES. - There is a tradeoff between bulk send and fast response, and it is important to handle potential failures to avoid lost emails. - There is a page that tracks completed and pending features related to the notification service. - There is a suggestion to allow the caller to point to an S3 folder for bulk email sends. - The maximum number of users sent to at a time is currently limited to 10, but further investigation is needed to determine the actual maximum possible number.",29,"['notification service', 'SQS', '@Async', 'faster response times', 'tracking table', 'Postgres database', 'additional states', 'AWS SES', 'completed features', 'pending features', 'S3 folder', 'maximum number of users']","['message loss', 'node crash', 'backlog', 'append mode', 'potential failures', 'lost emails', 'limited number of users']",NEUTRAL,['N-1'] 2023-05-24T09:11:17.070379,2023-05-24T10:50:00.107489,rtaraniganty,"{'araman', 'rtaraniganty', 'hchintamreddy'}",3,"- The discussion is about the availability of a ""previous"" or back button in a Slack bot. - The user rtaraniganty asks if there is a previous page option. - The user hchintamreddy mentions that there was an issue with follow-up questions causing dynamic navigation, but mnirmal has changed the behavior to show follow-ups on the same page. - hchintamreddy suggests reviewing the possibility of introducing a back button in teams. - rtaraniganty asks if there were technical challenges with implementing a back button. - hchintamreddy confirms that the original design had a back button, but it was removed due to complexity caused by follow-up questions. - rtaraniganty acknowledges the challenges but expresses concern that users will find it challenging without a back button.",8,[],"['availability of previous/back button', 'issue with follow-up questions', 'complexity caused by follow-up questions', 'challenges without a back button']",,[] 2023-04-24T22:25:18.581139,2023-04-25T00:06:03.245179,pjha,"{'aganivada', 'pjha'}",2,- There is a change that needs to be merged and deployed. - There is a suggestion to try deploying the tenancy with the change instead of destroying and deploying. - The change in the service-id requires deleting it. - There is a discussion about creating a merge request for 'issue/shortning_service_id' to 'release/0.9.10'. - The impact of 'shortning_service_id' is only on the loadbalancer - route53 integration. - It is agreed to merge 'shortning_service_id' to 'release/0.9.10' which will take it to develop and main.,9,"['change needs to be merged and deployed', 'suggestion to try deploying the tenancy with the change', ""creating a merge request for 'issue/shortning_service_id' to 'release/0.9.10'"", ""agreed to merge 'shortning_service_id' to 'release/0.9.10'""]","['change in the service-id requires deleting it', ""impact of 'shortning_service_id' only on the loadbalancer - route53 integration""]",,[] 2023-05-29T05:59:34.355399,2023-05-30T08:50:52.056859,pjha,"{'aganivada', 'pjha', 'rtaraniganty', 'askumar'}",4,"- The team is discussing the upgrade of the RDS-Aurora-Postgres engine version to 15.2. - AWS recommends the second strategy with some checks prior to upgrading the engine version. - Restoring the snapshot does not automatically revert the version. - The team discusses two approaches for the upgrade: Approach-1 involves cloning a recent snapshot, upgrading the snapshot, connecting core-data to the new instance, dropping the old instance, and updating all services to point to the new instance. Approach-2 involves cloning a recent snapshot, upgrading the active instance with planned downtime, updating mappings if there are any issues, and dropping the new instance if no issues are found. - The team is checking for regression issues and plans to merge the CDK changes by the end of the day. - The RDS core instance was started for testing purposes, and it can be restarted if needed. - Precheck commands were executed on the clone created with a snapshot, and the expected responses were received, indicating that the prerequisites are met. - For integration, the team plans to use Approach-1, while for stage and prod, they plan to use Approach-2. - The RDS instance has been upgraded, and the core-data service has been deployed to the upgraded Postgres RDS. - The team is discussing updating the information in the engineering channel thread.",25,"['upgrade of RDS-Aurora-Postgres engine version', 'approach-1', 'approach-2', 'checking for regression issues', 'CDK changes', 'precheck commands', 'integration', 'stage and prod', 'RDS instance upgrade', 'core-data service deployment', 'updating information in engineering channel thread']","['restoring snapshot', 'dropping old instance', 'updating mappings', 'regression issues', 'restarting RDS core instance']",,[] 2023-06-02T01:17:01.105649,2023-06-02T03:35:47.136769,askumar,"{'aganivada', 'askumar'}",2,"- The team is discussing an issue with tests failing when adding the @transactional annotation to Delegate methods. - The failures occur because the transaction is not committed in case exceptions are thrown, resulting in the API being unable to provide a response. - Rtaranganty had previously highlighted this issue. - Askumar suggests adding an interceptor to close the transaction in failure cases. - Askumar also mentions trying to keep the annotation at one place.",3,"['Adding interceptor to close transaction in failure cases', 'Keeping annotation at one place']",['Tests failing when adding @transactional annotation to Delegate methods'],UNKNOWN,[] 2023-05-22T02:45:56.014699,2023-05-22T03:51:21.764999,mnirmal,"{'aganivada', 'mnirmal'}",2,- Mnirmal is requesting a snapshot of the stage DB before deployment to debug a PMS issue. - Aganivada agrees to provide the snapshot but mentions that it will take longer as they need to create a DB out of the snapshot to extract the data. - Aganivada asks Mnirmal to try some files from a specific location and requests feedback on whether it works. - Aganivada informs the team (tagged as @plat) that they have added a DB instance in the stage for this request and will remove it once they receive confirmation from collab.,7,"['snapshot request', 'DB creation', 'DB instance added']","['PMS issue', 'longer time']",,[] 2023-04-12T16:19:51.089739,2023-04-12T21:20:13.795709,rtaraniganty,"{'aganivada', 'askumar', 'rtaraniganty'}",3,"- The team is discussing the idea of generating a script to generate events of different types, users, and tenants to feed into Snowflake. - They want to see if they can get the queries in place to support the PM asks. - They plan to connect the script with Retool to build the required charts. - They want to make progress in this sprint, focusing on ensuring that the toolchain will provide what they need. - askumar has worked on building a tool to dump data in Grafana on the Snowflake instance, and they may extend it to publish events. - askumar has inserted data via Snowflake's UI and dumped it via the Snowflake dashboard. - They plan to try the same approach for notifications as well.",4,"['generating script', 'supporting PM asks', 'connecting with Retool', 'making progress in sprint', 'dumping data in Grafana', 'inserting data via Snowflake UI', 'dumping data via Snowflake dashboard', 'trying same approach for notifications']",[],,[] 2023-05-02T09:22:24.602179,2023-05-03T00:01:20.026789,aganivada,"{'aganivada', 'askumar'}",2,"- The team is working on version 0.9.11 and has a planning sheet for it. - They have identified certain tasks that need to be completed in sprint 3, which are marked in bold. - Pjha and askumar are responsible for updating tasks related to the database. - Askumar has updated two tasks related to database read/write operations: exploring pg_dump approach for data replication and using CDK to populate cluster name and variables for all services.",3,"['working on version 0.9.11', 'planning sheet', 'identified tasks', 'sprint 3', 'updating tasks', 'database read/write operations', 'exploring pg_dump approach', 'using CDK', 'populate cluster name and variables']",[],,[] 2023-05-22T04:44:27.194579,2023-05-22T05:40:53.302399,rsrinivasan,"{'aganivada', 'rsrinivasan', 'mnirmal', 'ppant'}",4,- rsrinivasan is experiencing a 403 forbidden error for the notification service in int for the swagger endpoint. - aganivada suggests trying from VPN. - ppant confirms that it works with VPN for them. - mnirmal tries from VPN and confirms that it works.,5,['VPN'],"['403 forbidden error', 'notification service', 'swagger endpoint']",,[] 2023-03-09T10:49:19.239149,2023-03-09T21:18:48.759719,rtaraniganty,"{'aganivada', 'rtaraniganty'}",2,"- The release/0.9.8 branch has not been closed properly and the manifest files are not in sync with the develop branch. - There are mismatched versions of services in the environment. - The git-flow plugin did not delete the release/0.9.8 branch properly, possibly due to it being protected. - Recent deployments to the stage environment were done from the main branch, indicating that the main branch may have a higher version. - The git-flow plugin is not deleting the remote branch, only the local branch. - It may be necessary to manually delete the remote branch using the command ""git push origin --delete release/0.9.8"". - Going forward, a HF CDK branch will be created immediately after merging the release branch to avoid confusion. The HF branch will be used for hotfix deployments and merged to the main branch when deploying to production.",14,"['HF CDK branch created immediately after merging release branch', 'Hotfix deployments', 'Merging HF branch to main branch for production deployment']","['Release/0.9.8 branch not closed properly', 'Manifest files not in sync with develop branch', 'Mismatched versions of services in environment', 'Git-flow plugin not deleting release/0.9.8 branch properly', 'Remote branch not deleted by git-flow plugin', 'Manual deletion of remote branch required', 'Recent deployments to stage environment from main branch']",,[] 2023-06-26T09:04:19.476239,2023-06-26T11:51:48.932359,rtaraniganty,"{'pjha', 'rtaraniganty'}",2,"- The tasks for the GL runner have been completed, including alarms, sizing up and down during releases, and adding action required based on alerts in the Run Book. - There is no specific Confluence page for scaling up and down, but the commands have been added to the run book page. - A separate Confluence page will be created for scaling up and down. - To avoid duplicates, a page will be created first and then linked in the runbook. - The commands for scaling up and down have been added to the 'How To' page for GitLab Runner.",8,"['tasks completed', 'alarms', 'sizing up and down', 'action required', 'commands added to run book', 'separate Confluence page', 'avoid duplicates', 'linked in runbook', ""commands added to 'How To' page""]",[],,[] 2023-01-30T22:29:05.464339,2023-01-31T06:05:19.695039,rtaraniganty,"{'akasim', 'rtaraniganty'}",2,"- The logs for certain days are missing in the logz.io UI. - This is an unusual and unexpected state. - The missing logs are causing issues for debugging and affecting development. - The retention period for logs in the staging environment was set to 7 days, which caused the logs to be deleted. - The retention period has been fixed to 2 weeks for staging and integration environments, and 30 days for production.",16,"['Retention period fixed', 'Staging environment logs retention increased', 'Integration environment logs retention increased', 'Production environment logs retention increased']","['Missing logs in logz.io UI', 'Unusual and unexpected state', 'Issues for debugging', 'Affecting development']",NEGATIVE,[] 2023-04-06T19:51:54.112049,2023-04-06T22:33:27.765819,rtaraniganty,"{'aganivada', 'svummidi', 'anair', 'rtaraniganty'}",4,"- rtaraniganty sent an email from Drata to invite svummidi and aganivada to complete the first round of assessment. - The instructions provided were accurate and svummidi was able to complete the process successfully. - There is a suggestion to create an Apple ID with a company email account instead of a personal account for FileVault. - It took approximately 15-20 minutes for aganivada to complete the assessment. - anair completed the assessment in about 10 minutes, except for the background check which would be uploaded by their organization.",12,"['accurate instructions', 'successful completion', 'quick completion']",['suggestion for Apple ID'],,[] 2023-03-01T04:02:54.740919,2023-03-02T03:40:14.120689,ppant,"{'aganivada', 'rsrinivasan', 'ppant'}",3,"- There is an issue with the JPA query specification when using distinct=true and sort filters together. - When distinct=true and sort filters are applied, multiple queries are executed and one of the intermediate queries results in an error. - The error occurs because the ORDER BY expressions must appear in the select list when using SELECT DISTINCT. - This issue is related to a known issue in JPA. - One possible solution is to use a visibility library to build raw queries that avoid the double left join. - Another option is to add methods with native queries as a worst-case scenario. - The current issue is not blocking apps, but it can cause a 500 error if someone uses the affected API. - The API response for getCustomers will be distinct, so it is not a problem for the current implementation. - The curl command provided in the discussion fails in this case. - There is a suggestion to restrict the sort filter to only root table column id for immediate requirements. - The distinct=true parameter is usually sent by default in app calls, so the problem can occur in other use cases. - It is suggested to check with apps to see what sort clauses they are planning to use.",16,"['JPA query specification', 'visibility library', 'getCustomers API response']","['error', 'known issue', '500 error', 'curl command failure', 'sort filter restriction']",NEUTRAL,[] 2023-08-09T22:58:54.784389,2023-08-10T03:49:17.318839,aganivada,"{'aganivada', 'akasim', 'hchintamreddy'}",3,- Subscription renewal has started again in MB. - Brokering is now working. - The release thread needs to be updated with these findings. - A filter for all Event bridge invocation failures has been provided. - The provided link leads to metrics related to failed invocations.,8,"['Subscription renewal has started again in MB', 'Brokering is now working', 'A filter for all Event bridge invocation failures has been provided']","['The release thread needs to be updated with these findings', 'The provided link leads to metrics related to failed invocations']",,[] 2023-05-24T22:42:49.283339,2023-05-24T23:00:41.923049,vtiwari,"{'aganivada', 'vtiwari', 'ppant'}",3,"- The user vtiwari was unable to access certain APIs due to a lack of available licenses in the workspace. - ppant suggests reopening the browser and trying again, as this has worked in the past. - aganivada asks vtiwari if it worked on refresh, but vtiwari confirms that it did not. - aganivada adds more licenses to the workspace and suggests vtiwari try again. - ppant jokes that getting additional licenses is not easy. - ppant mentions that a similar issue happened with Mohith a couple of days ago, but it eventually worked after trying again. - aganivada speculates that they may have added licenses at the same time Mohith was trying. - vtiwari confirms that it worked after aganivada added more licenses and thanks them.",10,"['API access issue', 'Adding licenses']",['Lack of available licenses'],POSITIVE,[] 2023-05-16T02:22:27.276629,2023-05-16T02:30:22.983459,ppant,"{'rsrinivasan', 'ppant'}",2,"- The pipeline for the apps-workflow-lambda is failing at the create-pypi stage because a wheel with the same package name already exists. - It is suggested to trigger the deletion of the existing wheel from the pipeline instead of manually deleting it every time. - Since the lambda package is not imported anywhere, it is proposed to remove the stage from the pipeline altogether. - Typically, for lambda functions, they are not published to PyPI. - The variable PUBLISH_PYPI is set to 1 if the branch is protected, causing the stage to always be executed in develop branches. - For lambda functions, the variable PUBLISH_PYPI is not set. - The variable PUBLISH_PYPI can be explicitly set to zero for apps lambda. - The person will make the necessary changes based on the discussion.",11,"['trigger deletion of existing wheel', 'remove stage from pipeline', 'make necessary changes']","['pipeline failure', 'existing wheel with same package name', 'manually deleting wheel', 'lambda package not imported', 'PUBLISH_PYPI variable', 'always executed in develop branches']",NEGATIVE,[] 2023-06-23T10:00:40.871109,2023-06-26T09:03:34.081809,pjha,"{'aganivada', 'pjha', 'rtaraniganty'}",3,- A policy has been created to restrict CLI and console access to the IAM user if the user is not connected to the VPN. - The policy is being considered to be added to the AxmPowerUserGroup in INT. - The implementation and testing of the policy should be done during IST hours to minimize impact. - It is important to document the changes and provide instructions for rollback if needed. - The impact on GitLab tasks should be checked after implementing the policy. - The team acknowledges that they missed considering GitLab testing during low impact IST hours.,3,"['Policy created to restrict CLI and console access to IAM user', 'Policy being considered for AxmPowerUserGroup in INT', 'Implementation and testing during IST hours', 'Documentation and rollback instructions', 'Checking impact on GitLab tasks']",['Missed considering GitLab testing during low impact IST hours'],,[] 2023-03-07T04:46:11.898769,2023-03-13T21:15:57.149839,aganivada,"{'aganivada', 'svummidi', 'ppant'}",3,"- Aganivada is asking someone to connect to VPN and check if they can access a specific URL. - Aganivada has provisioned a route53 resolver in int and updated private DNS settings in perimeter 81. - Once on VPN, they should be able to access services directly with a private domain name without making any additional changes to DNS settings in Mac. - Ppant confirms that they are able to access the URL and asks if it is an HTTP endpoint. - Svummidi mentions that Akalyta is not able to access the tenancy service and asks if it is related to the thread. - Aganivada apologizes and mentions that they are working on the VPN issue. - Aganivada assures Svummidi that the service should be accessible now. - Aganivada mentions that they have shared details with Akalyta privately.",8,"['VPN connection', 'URL access', 'Route53 resolver', 'Private DNS settings', 'Accessing services with private domain name']","['VPN issue', 'Tenancy service access']",,[] 2023-02-28T03:01:51.906139,2023-02-28T03:26:54.144969,aganivada,"{'aganivada', 'snangia'}",2,- The team is working on automating the publishing of a bot to an S3 bucket. - They are planning to push images/bot icons from GitLab to the S3 bucket as a zipped file. - It is unclear if this process is applicable only for team images or if it also includes Slack. - The team still needs to check if they will be using S3 for Slack.,4,"['automating publishing of bot', 'pushing images/bot icons from GitLab to S3 bucket', 'using S3 for team images']","['unclear process for Slack', 'need to check if S3 will be used for Slack']",,[] 2023-04-06T01:40:23.709189,2023-04-06T07:29:05.395059,askumar,"{'rsrinivasan', 'askumar'}",2,- The discussion is about integrating and testing Aurora. - The reference document for Aurora integration and testing is provided. - Participants are encouraged to add comments and questions as the discussion progresses. - Rsrinivasan has shared a link with his questions and asks for a comparison between their own approach and a library. - Rsrinivasan also shares a link on how to use the library for Spring Cloud AWS RDS.,3,"['integrating Aurora', 'testing Aurora', 'reference document provided', 'participants encouraged to add comments and questions', 'link shared for comparison', 'link shared on how to use library for Spring Cloud AWS RDS']",[],,[] 2023-04-23T23:05:18.065599,2023-04-23T23:27:43.490469,akalyta,"{'akalyta', 'aganivada'}",2,"- Akalyta is requesting Aganivada to raise the hard limit for OpenAI to avoid interruptions. - Akalyta suggests raising the hard limit to $180. - Aganivada agrees to raise the hard limit. - Aganivada also mentions that they have requested to raise the account limit to $250, indicating the possibility of going even higher if it makes sense.",4,"['raising hard limit', 'raising account limit']",[],,[] 2023-06-19T04:19:56.532659,2023-06-21T06:37:47.799119,aganivada,"{'aganivada', 'rtaraniganty'}",2,"- The team is discussing an issue with using regex on a text(analyzed) field in Logz. - Logz has suggested reverting the field type to keyword(string) or using a regex expression that includes both lower and upper case. - There is a bug (bug-id 61838) that affects regex on analyzed fields, and Logz will reach out once it is resolved. - The team decides to update the field type to keyword in the staging and production environments for now. - Logz also proposed using the "".raw"" extension for regex, but the team decides against it. - The field mapping is updated in the staging and production environments, and regex seems to be working.",7,"['regex issue', 'reverting field type to keyword', 'updating field type to keyword', 'field mapping updated', 'regex working']","['bug affecting regex on analyzed fields', 'not using .raw extension for regex']",POSITIVE,[] 2023-05-02T09:37:21.298179,2023-05-02T22:32:26.340579,pjha,"{'pjha', 'rtaraniganty'}",2,"- The discussion is about creating an RDS Aurora Serverless_v2 instance using CDK_v2. - There is an issue where the cluster is created but read instances cannot be created. - The issue is not fully supported and there is a GitHub pull request related to it. - The user ""rtaraniganty"" is trying to understand the problem and asks for clarification. - The user ""pjha"" confirms that they couldn't create a cluster with read and write instances using CDK. - ""rtaraniganty"" provides a link to a GitHub issue related to the problem.",5,[],[],,[] 2023-06-18T22:03:10.549739,2023-06-19T09:59:45.078669,mnirmal,"{'aganivada', 'rtaraniganty', 'mnirmal'}",3,"- mnirmal is encountering an error while trying to deploy a stack called ""sns_events_publisher_stack.py collab-pulse-notification-events-sqs"" from the release/0.9.12 branch. - aganivada suggests running the ""./build-package.sh"" script in the ""../es-cluster-initial-setup/"" and ""../ip_authorizer/"" directories before deploying. - mnirmal asks if the deployment can be done on the stage environment, to which aganivada agrees. - mnirmal confirms that the changes have been pushed to the release branch. - aganivada deploys the stack on the stage environment. - mnirmal expresses gratitude to aganivada. - aganivada asks mnirmal to update the deployment dependency page. - mnirmal apologizes for forgetting to update the page and mentions that ""sns-events-publisher"" is already part of the core-stacks deployment dependencies, so no extra steps are needed. - aganivada questions if the ""collab-pulse-notification-events-sqs"" also needs to be deployed for adding a new event. - mnirmal is unsure and asks pkarthikeyan for confirmation.",23,"['deployment', 'error', 'script', 'stage environment', 'changes pushed', 'gratitude', 'core-stacks', 'deployment dependencies', 'new event']","['unknown', 'error']",,[] 2023-05-10T13:09:51.715059,2023-05-10T20:29:30.021509,rtaraniganty,"{'aganivada', 'rtaraniganty'}",2,"- The person named rtaraniganty is asking if they were planning on allowing people to connect to RDS instances in an environment after connecting to the VPN, as it is currently not working. - The person named aganivada acknowledges the issue and suggests that they need to add P81 SG to RDS access as part of VPN CDK changes.",2,"['RDS instances', 'VPN', 'connectivity']","['not working', 'issue']",,[] 2023-08-23T09:01:19.254729,2023-08-24T22:02:38.849999,pjha,"{'pjha', 'rtaraniganty'}",2,"- The discussion is about reviewing a weekly report analysis using CUR for stage c.c aganivada. - The cost graph excludes bars when the cost is less than $2 in a week. - The raw data is uploaded to S3 by CUR. - The AWS CLI command to get the S3 data is provided. - Athena is used to query the data instead of directly querying the file. - CUR provides a cloudformation template that creates Lambda Function, Athena, and AWS Glue Database. - The Lambda Function is triggered when a new event occurs in the S3 bucket and updates the Glue. - A Confluence page is requested to be created with the flow details and sample queries.",8,"['Reviewing weekly report analysis', 'Using CUR for stage c.c aganivada', 'Cost graph customization', 'Uploading raw data to S3', 'Using AWS CLI to get S3 data', 'Using Athena to query data', 'Cloudformation template for CUR', 'Lambda Function for Glue update', 'Creating Confluence page']",[],,[] 2023-04-12T11:45:32.393469,2023-04-24T10:16:31.951749,akalyta,"{'akalyta', 'svummidi', 'psomasamudram563'}",3,"- The discussion is about parsing G2 products data and making changes to the parsed files to ensure they interact well with scripts that use the data. - The suggested changes include uploading files in .csv format, uploading files directly to the `rawdata` folder, ensuring lowercase filenames with dashes separating words, and naming columns in the data file as the fields in G2. - There is a discussion about the format of the data files, with a suggestion to convert them to CSV before uploading to S3. - The team is collecting data from different sources and wants to avoid making it specific to one site. - Two CSV files have been uploaded to the `rawdata` folder for Splunk Enterprise and other Splunk products. - The team is exploring the possibility of creating products and linking the data using their chat interface. - There is a mention of manually populating indexes due to low data volume, but there is a plan to add automation if necessary. - The team discusses the need for a script with tenant ID and product instance ID to create product instances in the tenant. - Additional CSV files for CrowdStrike and Wiz have been uploaded to S3.",13,"['parsing G2 products data', 'making changes to parsed files', 'uploading files in .csv format', 'uploading files to rawdata folder', 'naming columns in data file', 'collecting data from different sources', 'exploring creating products and linking data', 'manually populating indexes', 'creating product instances in tenant']","['format of data files', 'making data specific to one site', 'low data volume', 'lack of automation']",,[] 2023-02-14T23:09:31.982239,2023-02-14T23:26:05.834519,aganivada,"{'aganivada', 'ppant'}",2,- Aganivada suggests using a status page tracking solution from Better Uptime. - Ppant agrees to evaluate the options and suggests tracking them in a confluence page. - Aganivada suggests waiting for confirmation from Aryan on specific requirements before proceeding.,3,['status page tracking solution from Better Uptime'],[],,[] 2023-05-15T23:04:03.677469,2023-05-16T01:35:07.209189,aganivada,"{'aganivada', 'pjha', 'askumar'}",3,"- Aganivada is asking Askumar to review the changes related to Aurora that were merged into the develop branch. - Askumar confirms that all the changes are present. - Aganivada mentions that once the three stacks are updated, they can merge cdkv2 to develop. - Pjha confirms that they are working on the last stack. - Pjha informs Aganivada that their changes have been merged into the feature/PLAT-792 branch. - Aganivada thanks Pjha for the update.",6,"['review changes', 'changes present', 'merge cdkv2', 'working on last stack', 'changes merged', 'update']",[],,['PLAT-792'] 2023-07-28T17:18:50.189259,2023-08-04T00:17:53.299259,svummidi,"{'aganivada', 'pjha', 'rsrinivasan', 'svummidi', 'rtaraniganty'}",5,"- The team is experiencing an error when trying to access secrets manager from an EC2 instance. - The error is related to an access denied exception for the user ""arn:aws:iam::433798924509:user/akalyta"". - It is suspected that the error is due to VPN access and the user-specific identity-based policy. - The team suggests switching to role-based access and running the service within the VPC. - It is decided to provision a new EC2 instance in the integration VPC to resolve the issue. - The team requests assistance from pjha to create the new instance and provide SSH public keys. - pjha agrees to prioritize the task and creates the EC2 instance with S3 and Secret access.",26,"['error accessing secrets manager', 'access denied exception', 'switching to role-based access', 'provisioning new EC2 instance', 'assistance from pjha', 'creating EC2 instance']","['VPN access', 'user-specific identity-based policy']",POSITIVE,[] 2023-04-23T20:58:05.484719,2023-04-23T21:21:16.959919,aganivada,"{'aganivada', 'bganganna'}",2,- The team is discussing running tests for permissions validation during the regression cycle. - Some refactoring and changes have been made to RBAC to enable reporting on the vendor side. - It is suggested to run tests for both customer side and vendor side scenarios if there is enough time. - The collaborators flow is considered more important in the context of the RBAC change on either side.,3,"['running tests for permissions validation', 'refactoring and changes to RBAC', 'enabling reporting on the vendor side', 'running tests for customer side scenarios', 'running tests for vendor side scenarios', 'importance of collaborators flow in RBAC change']",[],,[] 2023-01-16T08:57:08.254499,2023-01-16T14:34:21.148199,rtaraniganty,"{'aganivada', 'rtaraniganty'}",2,"- The team is discussing the possibility of adding an API in CoreData to reduce round trips. - The call to the core cannot be moved because it requires information about the organization from tenancy. - The suggestion is to update the tenancy call to return open invitations, reducing the two calls to one paginated API. - There is a suggestion to split the UI into two components: open invitations and active customers. - The team is considering a tabbed UI for current and invited customers, as it may be better if there is no filtering required.",3,"['Adding API in CoreData', 'Updating tenancy call', 'Splitting UI into two components', 'Tabbed UI for current and invited customers']","['Round trips', 'Two calls to one paginated API', 'Filtering required']",UNKNOWN,[] 2023-03-20T07:18:17.804749,2023-03-20T07:23:08.802149,aganivada,"{'aganivada', 'pjha'}",2,"- Aganivada is asking Pjha about their progress on load-balancer research. - Pjha mentions that they have not made any progress yet because they were dealing with other issues related to bringing up multiple instances. - Pjha states that now that those issues are fixed, they will start looking into the load-balancer.",2,['load-balancer research'],[],,[] 2023-08-24T05:02:43.301209,2023-08-24T05:17:13.442019,aganivada,"{'aganivada', 'pjha'}",2,- The GL-runner instance experienced CPU usage crossing 100%. - There is a suggestion to reduce the number of parallel pipelines running to address the high CPU usage. - There is also an alert for disk space. - CloudWatch shows that 7 jobs were running when the CPU crossed the threshold. - It is agreed that reducing the count of parallel jobs to 5 would help address the high CPU usage.,7,"['Reducing parallel pipelines', 'Agreement on reducing count of parallel jobs']","['High CPU usage', 'Alert for disk space']",POSITIVE,[] 2023-07-31T21:35:15.078989,2023-07-31T23:35:42.081119,svummidi,"{'svummidi', 'rsrinivasan'}",2,- The askAI celery service is not picking up the latest builds and the checksum from the build does not match the checksum in ECS. - The issue may be caused by changes to use custom ECR. - The latest develop branch build has been deployed. - The task execution is using old code and the IP for the worker is different. - There is suspicion that the request is going to a different cluster when submitted via the estimation service. - The estimation service's askai specification client is not synced with the latest askai server. - A build of the estimation service will be triggered and deployed. - There are no logs with the task ID in the error messages. - Using Swagger to submit requests to askAI shows logs with the task ID.,25,"['Latest builds not picked up by askAI celery service', 'Latest develop branch build deployed', 'Estimation service build triggered and deployed', 'Swagger shows logs with task ID']","['Checksum mismatch between build and ECS', 'Issue with custom ECR', 'Task execution using old code and different worker IP', 'Request going to different cluster via estimation service', 'askai specification client not synced with askai server', 'No logs with task ID in error messages']",NEGATIVE,[] 2023-07-28T07:39:52.578209,2023-08-15T22:12:54.633499,aganivada,"{'aganivada', 'rtaraniganty', 'askumar'}",3,"- There was an issue with the job-manager service, which restarted automatically due to high CPU usage and picked up the latest code. - The latest code expected an environment variable for Microsoft account refresh, which was not present. - To avoid deploying to production from the develop branch, the main branch was updated and deployed. - There is still an issue with a data job, which will be addressed separately. - The job-manager service is publicly available on port 8080 and is being attacked occasionally in production and other environments. - The suggestion is to enable VPN validation and set `apigw_enabled` to true to secure the service. - There is a reported similar issue with a PHP script, which may be related to a malicious attack on a Django website. - The team agrees to enable API Gateway and VPN protection for the job executor service. - It is confirmed that the job-manager service is used for Auth0 backups, systest cleanup, and Microsoft account refresh. - Auth0 backups are running successfully, but the data cleanup job is failing due to a missing 'imply_password'. - The job-manager access issue has been fixed in all three environments, and random GET calls should no longer reach the service.",15,"['job-manager service restarted automatically', 'latest code picked up', 'main branch updated and deployed', 'enable VPN validation', 'set apigw_enabled to true', 'enable API Gateway and VPN protection', 'Auth0 backups running successfully', 'job-manager access issue fixed']","['high CPU usage', 'missing environment variable', 'issue with data job', 'job-manager service being attacked', 'malicious attack on Django website', 'data cleanup job failing']",MIXED,[] 2023-06-28T06:19:11.883969,2023-06-28T06:54:33.760079,askumar,"{'aganivada', 'askumar'}",2,"- The discussion is about removing RDS instances in the INT environment since all the service schemas have been moved to an Aurora cluster. - Aganivada suggests giving a few days for commons before deleting the RDS instances completely. - There is a question about whether there are backups of other databases in S3. - Askumar agrees to keep the RDS instances for the next week and mentions that there are snapshots of the DB that can be restored. - Askumar also mentions taking backups to S3, but Aganivada suggests that only the latest backup might be sufficient. - Askumar agrees and mentions that the latest backup of all RDS instances' data is needed.",5,"['RDS instances removal', 'Aurora cluster migration', 'Snapshots', 'Backups to S3']",[],,[] 2023-04-11T02:47:45.460379,2023-04-11T20:33:57.409389,aganivada,"{'aganivada', 'ppant'}",2,- There is a bug in the code that needs to be fixed. The fix involves converting the domain to lowercase before comparison. - The `/publisher/all/{tenantId}` API is currently case sensitive for the domain in the query filter. - The suggested fix is to convert the domain to lowercase before passing it to the core. - The code does not currently convert the case when storing the data. - There is a suggestion to use RSQL ignore case or the like operator for the fix.,7,"['bug fix', 'domain conversion', 'RSQL ignore case', 'like operator']","['case sensitivity', 'code bug']",POSITIVE,['PLAT-1878'] 2023-06-02T05:22:47.882089,2023-06-02T10:50:31.384969,ppant,"{'rtaraniganty', 'ppant'}",2,"- The command `npx aws-cdk@2.x init app --language typescript` installs CDK v2 version 2.79.1 by default. - Some core stacks are deployed with CDK v2 version 2.82.0, causing a version mismatch issue. - The version mismatch issue manifests as a failure in the synth and deployment process with an error message indicating a CDK CLI and library compatibility problem. - One solution to fix the version mismatch is to replace the alias `cdk=""npx aws-cdk@2.x""` with `cdk=""npx aws-cdk@2.8.0""` in the common-core.sh script. - The version of npx installed on different machines can result in different CDK versions being installed when using the `cdk==2.x` command. - Explicitly setting the version to 2.82.0 in the common script resolves the issue. - It is unclear how to upgrade the CDK CLI to the latest version when installing CDK v2 via npx. - The version mismatch issue only affects the core stack, not the app services. - The full output of the synth failure error message is provided. - It is suggested to install CDK v2 globally and update the instructions, as npx and aliases were used to handle both CDK v1 and v2 in the past. - Installing CDK v2 globally and removing the aliases resolves the issue.",14,"['CDK v2 installation', 'Version mismatch issue', 'Fixing version mismatch', 'Explicitly setting CDK version', 'Upgrading CDK CLI', 'Installing CDK v2 globally', 'Removing aliases']",['Synth failure error message'],POSITIVE,[] 2023-07-20T02:54:20.478529,2023-07-20T02:59:52.453389,aganivada,"{'aganivada', 'ppant'}",2,"- The discussion is about adding permissions to invitations. - Initially, only accountAdmin and productAdmin invites were given permissions. - The reason for skipping orgadmin invites is not mentioned. - The CISO dashboard requires the ability to associate orgadmin invites with permissions. - Line 312 was added in the screenshot to accommodate this requirement. - The suggestion is made to remove the check for roles and test it with conditions. - It is agreed to remove the check later.",4,"['Adding permissions to invitations', 'CISO dashboard', 'Removing check for roles']","['Skipping orgadmin invites', 'Reason for skipping orgadmin invites']",POSITIVE,[] 2023-01-03T22:20:32.401649,2023-01-03T22:40:15.535679,pmangalapuri,"{'aganivada', 'pmangalapuri'}",2,"- The team is discussing a potential issue with Docker image overriding for a release version. - They mention that when they rebased the develop branch from main, they forgot to update to the next development version. - They are concerned that running the develop build might override the release version Docker image. - However, they realize that the develop branch always has ""-develop"" suffixed to it, so there should be no issue. - They mention that the version of the image is usually determined by project metadata, but for the UI project, it is updated manually. - The team confirms that they update the version manually for the UI project.",4,"['Docker image overriding issue', 'Rebasing develop branch', 'No issue with develop build', 'Updating version manually for UI project']",[],,[] 2023-02-14T01:00:31.962859,2023-02-14T02:14:56.430969,mnirmal,"{'aganivada', 'mnirmal', 'hchintamreddy'}",3,- Mnirmal requested to update the state of a pulse survey user instance and pulse survey question response in the database. - Hchintamreddy reviewed the queries and confirmed that they are good. - Mnirmal asked Aganivada to execute the queries on the production environment. - Aganivada confirmed that the queries have been executed successfully. - Two records were updated in the pulse_survey_user_instance table. - No records were updated in the pulse_survey_question_response table.,5,"['queries executed successfully', 'pulse_survey_user_instance updated']",['pulse_survey_question_response not updated'],,[] 2023-06-12T22:18:15.302029,2023-06-13T09:44:21.170239,hchintamreddy,"{'rtaraniganty', 'askumar', 'hchintamreddy'}",3,- There are indexing failures for production logs. - The failures seem to be common across multiple lambdas. - ppant is investigating the issue. - Prabhu raised a merge request (MR) related to this issue.,4,"['indexing failures', 'common failures', 'investigation', 'merge request']",[],,[] 2023-07-14T01:54:25.208239,2023-07-14T01:58:46.992009,aganivada,"{'aganivada', 'pjha'}",2,- Aganivada is asking if GitLab allows updating the default maximum TTL (Time to Live) for a job beyond 1 hour. - Pjha is unsure but offers to check if it is possible to change the default TTL. - Pjha confirms that the default timeout can be modified. - Aganivada expresses gratitude for the information provided by Pjha.,5,"['GitLab', 'updating default maximum TTL', 'default timeout can be modified']",[],,[] 2023-03-26T23:47:51.105509,2023-03-27T22:05:18.379799,pkarthikeyan,"{'aganivada', 'rtaraniganty', 'pkarthikeyan', 'svummidi', 'askumar', 'ppant'}",6,"- Perimeter 81 was experiencing connectivity issues, but it seems to be resolved now. - Some users were unable to connect to RDS when Perimeter 81 was connected, but it is not a high priority issue. - There were issues with using sshuttle to connect to Prod Postgres from VPN, but it should work seamlessly in all environments. - The VPN currently provides private access to int and stage environments, but not prod. - There may still be intermittent issues with Perimeter 81. - Test failures may be occurring due to changes in the VPN branch that haven't been merged to develop yet.",10,"['connectivity issues resolved', 'sshuttle works seamlessly']","['unable to connect to RDS', 'intermittent issues with Perimeter 81', 'test failures']",,[] 2023-03-31T11:17:59.473949,2023-04-01T02:36:31.609259,rsrinivasan,"{'aganivada', 'rsrinivasan'}",2,- The discussion is about implementing just-in-time provisioning in Auth0. - The goal is to automatically assign a group/role to a user in Auth0 so they can login and view the dashboard as a collaborator without the need for an orgadmin to manually add them in the UI. - The user is requesting to add this feature to the backlog for further exploration and discussion. - The person responding acknowledges that it may not be a straightforward implementation from Auth0's side. - They plan to sync up next week and add a platform ticket for this feature.,2,"['just-in-time provisioning', 'automatically assign group/role', 'dashboard collaboration']","['not straightforward implementation', 'further exploration and discussion']",,[] 2023-04-06T09:14:29.930769,2023-04-12T21:23:37.190539,rsrinivasan,"{'aganivada', 'gshenoy', 'rsrinivasan', 'rtaraniganty'}",4,"- The discussion is about fixing issues in multiple projects. - The build process involves running tests and deploying. - The .gitlab.ci.yml file needs to be updated to align with other services. - The platform-services-notification project is dumping a lot of text on the console, which is caused by the openapi generator. - The hibernate.show-sql property has been turned off. - The code coverage badge issue has been fixed in some of the projects. - The verbose true property has been removed in the Discovery project, resulting in the coverage badge showing up. - rtaraniganty has been actively working on fixing these issues. - gshenoy and rsrinivasan express their gratitude to rtaraniganty for his efforts.",6,"['fixing issues', 'updating .gitlab.ci.yml', 'fixing code coverage badge', 'removing verbose property', 'active work by rtaraniganty', 'gratitude to rtaraniganty']","['dumping text on console', 'code coverage badge issue']",,[] 2023-08-17T03:29:39.127169,2023-08-21T04:07:05.185019,aganivada,"{'aganivada', 'askumar', 'sjain'}",3,- Snowflake supports read-only access for the Product. - Sjain wanted access to the environment to check the schema and was granted access. - A user named Sanket was added with view-only access. - Access to production data cannot be restricted at the environment level due to the same tables being used for both INT and PROD data. - Sjain was able to add new tiles to the dashboard and Aryan was asked to review it. - Sanket's access to Snowflake was revoked.,12,"['Read-only access for the Product', 'Sjain granted access to check schema', 'Sanket added with view-only access', 'Sjain added new tiles to the dashboard', 'Aryan asked to review new tiles']","['Access to production data cannot be restricted at the environment level', ""Sanket's access revoked""]",MIXED,[] 2023-07-20T08:41:16.716119,2023-07-20T09:30:13.953459,aganivada,"{'aganivada', 'pjha'}",2,"- The metrics-manager service does not have VPN enabled. - The changes for the metrics service have been deployed. - The metrics service was recently merged to develop, causing the update to be missed.",2,['changes deployed for metrics service'],"['metrics-manager service does not have VPN enabled', 'update missed due to merge']",,[] 2023-03-29T10:49:16.981489,2023-03-29T21:09:25.940159,pjha,"{'aganivada', 'pjha'}",2,- There is a pull request (PR) to replace ECS IP with an ALB endpoint in Route53. - Aganivada suggests testing this change with a service in the int environment before merging. - Pjha agrees to test the change before merging.,3,"['PR to replace ECS IP with ALB endpoint', 'Testing change in int environment']",[],,[] 2023-06-14T16:29:52.370739,2023-06-14T19:54:15.125509,rtaraniganty,"{'aganivada', 'rtaraniganty'}",2,"- The user ""rtaraniganty"" is experiencing issues with regex not working for logs on the Stage and Prod accounts, but it is working on the INT account. - They suspect that someone may have requested a change in how Logz handles their logs, which could be causing the issue. - The user ""aganivada"" is not aware of any recent logz ticket raised, except for a harmless mapping issue related to ""mdc.tenantId"".",2,['regex working on INT account'],"['regex not working on Stage account', 'regex not working on Prod account', 'change in Logz handling logs', 'logz ticket raised']",,[] 2023-09-06T02:46:36.265039,2023-09-07T01:56:24.212979,aganivada,"{'aganivada', 'pmangalapuri', 'rvaidya'}",3,"- There is an issue with the admins page in pmangalapuri where clicking on re-send sends an invitation but does not trigger the delete invitation API from dashboard-app. - The team is discussing whether this behavior is expected and if it should be done from the UI or if apps can handle it. - aganivada suggests using the delete invitation API from tenancy or dashboard-app instead of handling it through the invite API, which would require additional changes to apps and UI.",4,[],"['issue with admins page', 're-send not triggering delete invitation API', 'behavior of re-send']",NEGATIVE,['PLAT-2522'] 2023-05-04T02:22:59.672009,2023-05-04T04:26:18.978129,ppant,"{'aganivada', 'rsrinivasan', 'ppant'}",3,"- The team is trying to customize the API Client generated by OpenApi codegen to route requests to a custom endpoint based on the tenant ID in the ThreadContext. - They created a Spring Component called `AxmApiClient` in `plat-libs-commons` to read endpoint configs from application.yml and return the basePath according to the tenant ID. - They added the webclient mustache template of openapi-codegen and its plugin in the tenancy's specification's pom.xml to modify the generated code of `ApiClient.java`. - They injected the bean of `AxmApiClient` in `ApiClient` using @Autowired and modified the `getBasePath` method to call `axmApiClient.getBasePath(str)` for routing logic. - The problem is that `AxmApiClient` is null in `ApiClient` when the application is run. - They tried using @DependsOn(""axmApiClient"") on `ApiClient` but it didn't fix the ordering issue. - One suggestion is to have a setter/getter in `ApiClient` instead of using @Autowired, so that the calling component or service can set the `AxmApiClient` bean. - The team wanted to explore if there is a solution at the library level to avoid changing the existing code. - It was discovered that the issue was with creating the `ApiClient` using the new keyword instead of as a bean, and using constructor injection instead of @Autowired resolved the problem.",8,"['customizing API Client', 'Spring Component', 'AxmApiClient', 'webclient mustache template', 'routing logic', 'setter/getter', 'constructor injection']","['null AxmApiClient', 'ordering issue']",POSITIVE,[] 2023-02-25T11:03:04.805629,2023-02-28T09:36:56.985229,rtaraniganty,"{'bganganna', 'araman', 'rtaraniganty'}",3,"- There are a number of bugs opened in February that do not have the Testing Complexity field populated. - The Testing Complexity field needs to be updated for the tickets created before adding this field. - It is suggested to make the Testing Complexity field mandatory, as long as there is a default value. - Araman agrees to make the Testing Complexity field mandatory.",7,"['Testing Complexity field', 'Making Testing Complexity field mandatory']","['Bugs without Testing Complexity field', 'Updating Testing Complexity field for old tickets']",POSITIVE,[] 2023-08-07T21:06:29.197369,2023-08-07T21:16:00.766129,aganivada,"{'aganivada', 'askumar'}",2,"- There is a thread that has been moved to a different location. - The thread may have been updated later. - The score provider is the only part that is false in the main. - The commit was merged on July 7th, but the develop branch shows a different date. - It is possible that the thread was manually updated later.",5,"['thread moved to different location', 'thread updated later', 'commit merged on July 7th', 'thread manually updated later']","['score provider false in main', 'develop branch shows different date']",,[] 2023-02-17T08:41:15.360069,2023-02-17T08:44:49.336819,araman,"{'aganivada', 'araman'}",2,"- Araman has raised questions and is waiting for replies on some changes. - Araman is asking if there are any fixes expected to be pushed today. - Aganivada is not aware of any fixes being pushed today, but mentions that the EA bulk upload has already been deployed. - Aganivada suggests waiting for confirmation from Product. - Araman expresses gratitude for the information.",3,"['questions', 'waiting for replies', 'information']","['no fixes expected', 'waiting for confirmation']",,[] 2023-04-12T08:51:44.421059,2023-04-12T22:09:53.007579,rvaidya,"{'aganivada', 'rvaidya', 'rsrinivasan', 'rtaraniganty', 'ppant'}",5,"- There is an error in the vendorapp in the INT environment related to accessing a Kinesis instance. - The error is caused by a lack of authorization for the ECS IAM role to perform the necessary actions on the Kinesis stream. - The ECS IAM role should have permissions to access the Kinesis instance if deployed via CDK. - The Kinesis stream name should start with ""int-*"" according to the IAM role permissions. - The service may not be picking up the configured Kinesis for cloud config events. - The application-int.yml file can be removed and the necessary configurations can be added to the application.yml file. - The stream tag should be inside the cloud tag in the application.yml file for the vendor dashboard. - The issue was resolved by removing the *-int.yaml files from the dev config and cleaning up the application.yaml file. - The stage and production config folders are fine.",20,"['error in vendorapp', 'lack of authorization', 'ECS IAM role', 'CDK deployment', 'Kinesis stream name', 'cloud config events', 'application.yml file', 'stream tag', 'vendor dashboard', 'issue resolved', 'dev config', 'cleaning up']","['INT environment', 'accessing Kinesis instance']",POSITIVE,[] 2023-02-16T21:42:29.393719,2023-02-16T21:45:25.524849,aganivada,"{'aganivada', 'ppant'}",2,"- Aganivada has added a task for customizing email templates during re-invite for the next sprint. - Ppant is willing to take up this task. - Ppant suggests passing a query parameter like `reinvite=true` and selecting the re-invite template. - Aganivada wants to check if it is possible through auth0, otherwise they will need to find a way to suppress it from auth0.",3,"['customizing email templates', 're-invite', 'query parameter', 're-invite template']",['suppress from auth0'],,['PLAT-1706'] 2023-03-06T15:35:52.152829,2023-03-08T11:03:08.923279,rtaraniganty,"{'rtaraniganty', 'rvaidya', 'hchintamreddy'}",3,"- The user is experiencing an error in IntelliJ when referring to a model class, but compilation from the command line is successful. - The user asks for tips or tricks to make IntelliJ behave correctly. - Another user mentions that they have also experienced this error in IntelliJ but don't worry about it if compilation is successful. - The error started occurring after a spring dependency update. - One user suggests defining the model class in requestBodies and referring to it.",6,"['tips or tricks for IntelliJ', 'compilation from command line', 'defining model class in requestBodies']","['error in IntelliJ', 'error after spring dependency update']",NEGATIVE,[] 2023-02-13T23:40:39.718969,2023-02-13T23:48:22.063079,rsrinivasan,"{'rsrinivasan', 'ppant'}",2,"- The discussion is about finding a better terminal than iTerm. - The user rsrinivasan suggests trying out a terminal called Warp. - The user ppant has tried using Warp and found it to be good. - However, ppant mentions that integrating tmux into Warp is difficult.",3,"['better terminal', 'Warp', 'good']","['iTerm', 'difficult', 'integrating tmux']",,[] 2023-08-10T03:38:39.190519,2023-08-10T08:58:20.753929,askumar,"{'aganivada', 'askumar'}",2,"- The team is discussing how to test ILM policies in the INT environment, which currently only has a single instance. - They are considering enabling cluster mode in INT to test the policies, but are concerned about the cost. - They are also discussing the possibility of creating a new cluster in INT, copying the data over, and pointing the audit log service to the new cluster for testing. - They are considering changing the index names to include a timestamp to make it easier to apply policies on a day-by-day basis. - They are discussing the cost of using ultrawarm instances for the cluster and the minimum number of nodes required. - They are considering reducing the number of nodes in the test cluster to minimize costs. - They are discussing the possibility of switching back to non-cluster mode after testing in INT/Stage. - They are considering copying the data over to one instance for testing, either through a script or manually. - They believe that the proposed solutions should be doable.",8,"['Testing ILM policies in INT environment', 'Enabling cluster mode in INT', 'Creating a new cluster in INT', 'Changing index names to include timestamp', 'Using ultrawarm instances for the cluster', 'Reducing number of nodes in test cluster', 'Switching back to non-cluster mode', 'Copying data over for testing', 'Proposed solutions are doable']","['Concerns about cost', 'Minimum number of nodes required']",UNKNOWN,[] 2023-05-16T10:12:51.549539,2023-05-16T10:37:45.095079,aganivada,"{'aganivada', 'askumar'}",2,- The discussion is about migrating data from one Snowflake instance to another. - One user suggests using SnowPipe to add data from S3 to the new account. - Another user mentions the suggestion of getting data into AWS S3 and copying it into the table. - There is a question about the ability to export and import config/scripts in Snowflake when provisioning a new account.,3,"['SnowPipe', 'Copying data into table']",['Export/import config/scripts'],,[] 2023-06-08T09:07:48.613329,2023-06-08T09:40:32.031969,aganivada,"{'aganivada', 'rtaraniganty'}",2,"- The team is planning to use point-in-time recovery for DynamoDB backups. - Point-in-time recovery provides 35 days of backup and backups are taken daily. - Adding a single line of code to the CDK will enable the backup feature for all DynamoDB instances. - The pricing for DynamoDB backups is not heavy, even if the backup size is 1GB per day. - The monthly cost for backups may have an overhead of $6. - A link to the documentation for point-in-time recovery in DynamoDB is provided. - There is a plan to test the recovery process for a table and document the steps.",4,"['point-in-time recovery', 'DynamoDB backups', 'backup feature', 'CDK integration', 'pricing', 'documentation', 'recovery process']",[],POSITIVE,[] 2023-07-06T19:50:50.047579,2023-07-07T03:10:32.615449,rsrinivasan,"{'aganivada', 'rsrinivasan', 'askumar'}",3,"- The issue is related to aliasing in the new service ""askai"" where it is pointing to the old RDS cluster instead of the new Aurora cluster. - The changes have been merged to the develop branch, so it should not be an issue. - It is suggested that the issue might be due to caching. - The stack is deploying two services internally, ""askai"" and ""askai-celery"", and creating two environments to overcome the issue. - The migration tag was added for ""askai"" but not for ""askai-celery"". - The CDK.json overrides were getting picked up for ""askai"" but not for ""askai-celery"". - The solution involved creating two CDK environments and making a one-line change in the Service stack. - There is no change in the baseApp stack, only in Seshan's app stack. - The two services use the same version from the pom file. - The docker image is published as ""askai-service"" and ""askai-celery"" and can run as a webservice or a celery background task.",11,"['changes merged', 'caching', 'two services created', 'CDK environments created', 'one-line change in Service stack', 'docker image published']","['aliasing issue', 'migration tag missing for askai-celery', 'CDK.json overrides not working for askai-celery', 'no change in baseApp stack']",NEUTRAL,['DAT-1062'] 2023-06-07T07:58:54.127609,2023-06-07T21:58:21.737899,aganivada,"{'aganivada', 'gshenoy', 'rtaraniganty', 'askumar'}",4,"- Aganivada encountered an S3 access denied error while trying to deploy a service/core stack to INT. - Askumar offered to help and successfully deployed the core data. - Aganivada suspected that their environment might have been the cause of the issue and planned to restart and check. - Askumar mentioned experiencing a similar S3 error in the past, which was resolved by using a VPN. - Aganivada decided to try deploying through a VPN as well. - Askumar discovered that a liquibase change was causing a conflict in the core data deployment. - Gshenoy suggested that the issue could be due to stale personas in INT and offered to investigate. - Aganivada asked if the issue was related to a query they ran in prod the previous day. - Gshenoy confirmed that the issue was related to the query and explained that INT was not cleaned up after deploying a hotfix directly in stage. - Askumar confirmed that the changelog went through after Gshenoy's intervention. - Aganivada mentioned intermittently encountering the CDK deployment issue and shared a GitHub thread discussing a potential solution. - Rtaraniganty asked about Aganivada's OS version and shared their experience of not encountering the CDK deployment issue. - Rtaraniganty suggested using pyenv to isolate and manage different Python versions. - Aganivada expressed interest in trying pyenv and thanked Rtaraniganty for the suggestion. - Rtaraniganty mentioned using Python 3.10 for the admin-gw service and switching versions with pyenv. - Aganivada noted that they and Askumar had encountered the CDK deployment issue, but it worked fine for others who recently deployed admin-gw.",42,"['successful deployment', 'resolved S3 access denied error', 'resolved liquibase conflict', 'resolved stale personas issue', 'suggested use of VPN', 'suggested use of pyenv', 'interest in trying pyenv']","['encountered S3 access denied error', 'suspected environment issue', 'encountered CDK deployment issue']",MIXED,"['PLAT-2037', 'DISC-3094']" 2023-04-25T23:24:27.147919,2023-04-26T00:02:13.773329,aganivada,"{'aganivada', 'pmangalapuri'}",2,- Aganivada updated the deployment dependency template to include i18nexus migration for all future prod deployments. - Pjha made the update to the template. - The template can be viewed on the Atlassian wiki. - Pmangalapuri thanked Aganivada for the update.,2,"['deployment dependency template update', 'i18nexus migration']",[],,[] 2023-04-03T09:09:33.801539,2023-04-03T16:30:39.145419,aganivada,"{'aganivada', 'anair'}",2,- The SSO integration is now working properly after re-adding the certificate. - Pavan requested to send invitations to three users from his team to join as orgadmins. - Pavan suggested syncing updated usernames in the Teammates feature. - Pavan also suggested showing the email column by default or adding an option to display emails in the collaborators page. - Rsrinivasan and aganivada acknowledged Pavan's requests and mentioned that they will work on them.,3,"['SSO integration', 'invitations', 'syncing usernames', 'displaying emails']",[],,[] 2023-03-02T21:59:30.289689,2023-03-02T22:04:16.386179,pjha,"{'aganivada', 'pjha', 'rtaraniganty'}",3,"- pjha has successfully figured out how to work with lambda functions within a VPC. - They have associated a VPC, subnet, and security group to the lambda function. - They have made a call to the internal endpoint within the VPC and received a response from the API. - pjha is still working on the documentation and will share it once it's done. - aganivada asks pjha to share the lambda link. - aganivada also asks pjha to update the CDK for one of the existing lambda functions and move it into the VPC. - If the MR (Merge Request) passes, they can update other lambda functions as well. - rtaraniganty acknowledges pjha's good work.",6,"['successfully figured out how to work with lambda functions within a VPC', 'associated a VPC, subnet, and security group to the lambda function', 'made a call to the internal endpoint within the VPC and received a response from the API', ""acknowledged pjha's good work""]","['still working on the documentation', 'aganivada asks pjha to share the lambda link', 'aganivada asks pjha to update the CDK for one of the existing lambda functions and move it into the VPC', 'update other lambda functions if MR passes']",,[] 2023-02-28T14:29:28.389459,2023-03-02T22:03:29.875339,rtaraniganty,"{'aganivada', 'rtaraniganty', 'ppant'}",3,"- The team is experiencing issues with debug logs in the cloud config and is looking for a tool to help remember when debug logs are turned on. - There is a concern about the ability to load log4j on the fly without having to bounce the system. - A bug in the production environment caused the team to exceed their quota due to debug logs being enabled. - The Logz.io platform provided some additional log capacity to help resolve the issue. - The team discusses the possibility of setting up precommit hooks for log4j2.xml and automated rollback of git commits. - There is a discussion about the configuration of log4j and whether the log file is being pushed to the node or if there is a significant delay. - The team considers using LS (likely referring to Logstash) as a temporary tool to notify on-call if debug logs are enabled in the production environment. - It is discovered that services fully integrated with cloud config are able to pull log config changes without restart, while services not fully integrated require a restart. - The team plans to investigate the integration of Collab services with cloud config to ensure they are functioning correctly. - A bug related to log4j2 not refreshing configurations even with the monitorInterval set is mentioned, and a link to the bug report is provided. - The team discusses changing the log level to debug in some services to gather more information about the monitoring process.",22,"['debug logs', 'tool for remembering debug logs', 'additional log capacity', 'precommit hooks for log4j2.xml', 'automated rollback of git commits', 'integration of Collab services with cloud config', 'changing log level to debug']","['issues with debug logs', 'concern about loading log4j on the fly', 'exceeding quota due to debug logs', 'bug in production environment', 'configuration of log4j', 'significant delay in log file pushing', 'bug related to log4j2 not refreshing configurations']",,[] 2023-02-17T09:09:30.772179,2023-02-20T03:16:35.359889,rtaraniganty,"{'araman', 'rtaraniganty'}",2,- The discussion is about deploying version 0.9.7 rel1. - The deployment is dependent on knowing the root cause analysis (RCA) for an issue discussed in a Slack conversation. - The deployment is planned for Monday noon.,2,"['deployment', 'version 0.9.7 rel1']","['root cause analysis', 'issue', 'Slack conversation']",UNKNOWN,[] 2023-07-22T14:02:40.859809,2023-07-24T20:32:50.217879,rtaraniganty,"{'aganivada', 'rsrinivasan', 'rtaraniganty'}",3,"- RDS spend in the INT account has improved compared to a few months ago, partially due to the move to Aurora V2. - There has been an increase in RDS usage in the last few days, particularly on the PM side. - There was a spike in estimation queries on the 18th, but the sustained increase is on the PM side. - There are discussions about measuring performance insights and cost trends week over week, focusing on key changes and reporting on them. - There is a suggestion to configure CUR (Cost and Usage Report) and run a GL (General Ledger) job every week to analyze and report on the data. - There is a need to determine if CUR needs to be set up once per environment or if it can be done for multiple accounts from one place.",13,"['Improved RDS spend in INT account', 'Move to Aurora V2', 'Increase in RDS usage', 'Measuring performance insights', 'Cost trends week over week', 'Configuring CUR', 'Running GL job', 'Analyzing and reporting data', 'Setting up CUR per environment', 'Setting up CUR for multiple accounts']","['Spike in estimation queries', 'Sustained increase on PM side']",,[] 2023-07-27T22:53:28.265049,2023-07-27T22:53:38.883449,aganivada,{'aganivada'},1,"- The user ""rtaraniganty"" mentioned that the P81 platform automatically logs out users after 30 days. - ""aganivada"" suggested updating the automatic logout time to 60 days, which is the maximum allowed time. - ""aganivada"" provided a link to edit the configuration related to this issue.",2,[],[],NEUTRAL,[] 2023-08-03T09:27:14.249959,2023-08-04T09:43:40.056369,rsrinivasan,"{'rsrinivasan', 'rtaraniganty'}",2,- Celery supports both horizontal and vertical scaling. - A Celery system can have multiple workers and brokers for high availability and horizontal scaling. - The team is discussing how to apply horizontal scaling to their setup and whether to auto-scale or scale up manually. - There is a concern about combining horizontal scaling with the use of SQS. - The team has configured autoscaling based on the length of pending messages in the SQS queue. - The autoscaling configuration includes specific steps for adding or removing workers based on the number of messages in the queue. - The team has not yet tested the autoscaling configuration. - The team wants to ensure that the scaling works properly and that there are no issues with runaway scaling. - The team has verified the scaling and made fixes to ensure it works correctly. - Documentation has been created to explain the autoscaling configuration.,9,"['Celery supports horizontal and vertical scaling', 'Multiple workers and brokers for high availability and horizontal scaling', 'Configured autoscaling based on pending messages in SQS queue', 'Verified scaling and made fixes', 'Documentation created for autoscaling configuration']","['Concern about combining horizontal scaling with SQS', 'Autoscaling configuration not yet tested', 'Worries about runaway scaling']",POSITIVE,[] 2023-01-11T15:06:51.493089,2023-01-12T20:19:55.566259,rtaraniganty,"{'aganivada', 'askumar', 'rtaraniganty', 'ppant'}",4,"- The user rtaraniganty is unsure if the Job executor service is running as expected and if it is successfully executing the required jobs. - The user aganivada suggests asking askumar and ppant to review the health of the job executors. - Ashwani, who is currently on-call, may also look into the issue. - The project related to the job executor service can be found at the given GitLab link.",7,[],[],,[] 2023-06-20T00:15:12.949629,2023-06-22T22:02:30.243169,ppant,"{'rtaraniganty', 'ppant'}",2,- The discussion is about reviewing a document containing the structure of a multi-instance config in SSM. - There is a suggestion to add the name of the user uploading the config and the time of upload in the JSON. - There is a request to add an option to read and print the parameter. - The script should be added in the cdk-artifacts repo in the apps/scripts directory. - There is a suggestion to fetch all SSM parameters for multi-instance if no tenant is given. - The suggestion is to create a JSON with parameters for each tenant. - The ability to pipe the output to jq is mentioned. - There is a discussion about how to fetch SSM parameters using regex or brute force.,10,"['Reviewing document', 'Adding user and time info in JSON', 'Adding read and print option', 'Adding script in cdk-artifacts repo', 'Fetching SSM parameters for multi-instance', 'Creating JSON for each tenant', 'Piping output to jq']",['Fetching SSM parameters using regex or brute force'],,[] 2023-03-21T00:26:54.060059,2023-03-21T09:14:12.192259,pjha,"{'aganivada', 'pjha', 'rtaraniganty'}",3,"- The current setup involves one backend-service calling another through a private endpoint that routes to ECS Tasks. - The Route53 values are currently set to the ECS task IP address instead of using a LoadBalancer. - There is a desire to have condition rules in place to route to different versions of the backend-service, which can be achieved through a LoadBalancer based on HTTP headers. - It seems that requests are not going through the LoadBalancer, and instead, Route53 is directly calling the ECS task on the private domain. - There is a suggestion to use DNS load balancing for within VPC access and potentially switch to going via the LoadBalancer. - It is recommended to test with a couple of services manually before updating CDK (Cloud Development Kit). - The LB (LoadBalancer) needs to be registered with the service name, potentially using Cloud Map. - The AWS Cloud Map can create a Route 53 alias record that routes traffic to an Elastic Load Balancing load balancer. - The CDK v1 reference should be consulted for implementing the desired setup.",14,"['private endpoint routing', 'condition rules for routing', 'DNS load balancing', 'testing with services manually', 'registering LB with service name', 'using Cloud Map', 'consulting CDK v1 reference']","['Route53 values set to ECS task IP address', 'requests not going through LoadBalancer', 'Route53 directly calling ECS task']",UNKNOWN,[] 2023-07-13T23:16:38.223649,2023-07-14T00:48:06.555019,rsrinivasan,"{'aganivada', 'rtaraniganty', 'hsingh', 'rsrinivasan', 'akasim', 'askumar'}",6,- rsrinivasan has injured his left hand and has a cartilage tear. - He will be connecting from home until he recovers. - The injury was caused by a bike accident. - The team members are expressing their concern and wishing rsrinivasan a speedy recovery.,7,"['speedy recovery', 'concern']","['injury', 'cartilage tear', 'bike accident']",,[] 2023-04-12T09:38:59.108249,2023-04-12T21:21:06.825409,askumar,"{'aganivada', 'askumar'}",2,"- The discussion is about the approach for data enrichment of snowflake data, specifically creating a map of TenantId to Name and UserId to Name. - Two approaches are being considered: - Approach 1 involves a Lambda function enriching the event by making a call to the backend for every event received before pushing the data. - Approach 2 involves Snowflake having a table with key value maps. There are two sub-approaches within this: - Sub-approach a involves a different Lambda function pushing this data via a job to Snowflake. - Sub-approach b involves taking a dump from the database and uploading the file manually or via a script. - There is a related discussion thread provided for further information. - Aganivada suggests using the same lambda function that pushes other events to also push tenant creation and user creation events. - Askumar agrees that the same lambda function can be used if the events come in the form of an event.",3,"['data enrichment', 'approach 1', 'approach 2', 'sub-approach a', 'sub-approach b', 'lambda function', 'Snowflake table', 'key value maps', 'dump from database', 'uploading file', 'tenant creation events', 'user creation events']",[],,[] 2023-02-10T03:45:28.861739,2023-02-13T19:59:27.140789,rvaidya,"{'aganivada', 'rvaidya', 'rsrinivasan', 'svummidi', 'askumar', 'ppant'}",6,"- There is a weird issue with one of the app calls in the STAGE environment. - The app is calling multiple downstream APIs with different users and tenants, causing permission issues. - The issue is intermittent but has occurred multiple times in the last few days. - The UserContextInterceptor is not properly clearing the request context attributes, leading to the use of incorrect user and tenant information. - The issue is more likely to occur when threads get stuck due to failed users and additional requests are sent to the same thread. - The root cause of the issue is that the postHandle method in the UserContextInterceptor is not invoked when there is an error during execution. - The fix for the issue is to use the afterCompletion method instead of postHandle to cleanup the context in MDC. - There is a suggestion to enable debug logging for a specific tenant in the STAGE environment. - The issue has not been observed in the production or integration environments. - A hotfix with the necessary changes will be deployed to address the issue. - There is a question about why the MDC context is used instead of the request header for querying the database. - The MDC context is used to check if the user is allowed to read a product instance, and if the header is forged or cached, the backend will not be able to determine the original caller. - There is a plan to test the fix and cover all cases of using stale user context from previous requests to avoid security issues.",30,"['intermittent issue', 'fix deployment', 'debug logging', 'hotfix deployment', 'testing fix']","['weird issue', 'permission issues', 'incorrect user and tenant information', 'failed users', 'error during execution', 'MDC context usage', 'security issues']",NEGATIVE,['PLAT-1682'] 2023-04-13T18:25:34.634889,2023-04-13T23:14:36.658099,ppant,"{'aganivada', 'hchintamreddy', 'ppant'}",3,"- The CDK apps stack is dependent on parameter stores from SSM (Systems Manager). - The specific parameter store being discussed is ""sqs-lambda-listener"". - The ""sqs-lambda-listener"" parameter is being injected by the CDK because it may have reused some core library references of common core stacks. - The ""sqs-lambda-listener"" parameter is applied to all apps stacks, including the tenancy stack. - The ""sqs-lambda-listener"" parameter is used by the ""message-broker"" service. - When the ""sqs-lambda-listener"" parameter was deleted, the synth/deploy process for all apps stacks failed.",6,"['CDK apps stack', 'parameter stores', 'SSM', 'sqs-lambda-listener', 'core library references', 'message-broker service', 'synth/deploy process']",['failed synth/deploy process'],,[] 2023-03-21T08:26:46.904519,2023-03-21T09:15:13.868229,pjha,"{'pjha', 'rtaraniganty'}",2,- The discussion is about finding documentation for auto-deploying a Docker image to the ECR (Elastic Container Registry). - The documentation is available on the Atlassian Confluence page. - The next step in the discussion is to determine what the lambda function would look like and what permissions are required.,2,['documentation available on Atlassian Confluence page'],['finding documentation for auto-deploying Docker image to ECR'],NEUTRAL,[] 2023-02-25T10:02:56.055279,2023-03-24T19:48:11.318389,araman,"{'aganivada', 'rvaidya', 'akasim', 'bganganna', 'araman', 'rtaraniganty'}",6,"- There is an issue with a tenant facing a 401 (auth0 exception) while navigating to the Actions page on stage. - The general dashboard for this tenant is working fine. - There were similar issues with loading the vendor dashboard, but it worked fine after reloading. - The issue seems to be related to fetching organization details of deleted or non-existent orgs in auth0. - The tenant in question is 17875 and the action creation was working fine until version 0.9.8. - There may be old connections with deleted tenants causing the issue. - There are two potential issues: handling tenant deletion gracefully in Apps and handling tenant deletion in UAS. - Some tenants that were deleted still have mappings in MB/UAS, causing attempts to fetch info from the deleted tenants and resulting in failures. - The tenants causing issues are 16993, 17137, and 17139. - It is unclear if there were any issues with deleting these tenants. - The tenant name is now displayed in user actions, which may be causing the issue. - The issue will be fixed in version 0.9.9 to handle deleted tenants in the connected-products flow. - The tenant deletion process will be re-triggered to ensure proper cleanup. - The fix should prevent similar issues in production customers.",29,"['general dashboard working fine', 'vendor dashboard working fine', 'issue fixed in version 0.9.9', 'fix prevents similar issues']","['401 (auth0 exception) while navigating to Actions page', 'issue with fetching organization details of deleted/non-existent orgs', 'old connections with deleted tenants causing issue', 'handling tenant deletion gracefully in Apps', 'handling tenant deletion in UAS', 'deleted tenants still have mappings in MB/UAS', 'unclear if there were issues with deleting tenants', 'tenant name causing the issue', 'tenant deletion process needs re-triggering', 'potential failures in fetching info from deleted tenants']",NEGATIVE,[] 2023-02-16T08:34:57.122469,2023-02-16T09:12:37.708189,rtaraniganty,"{'aganivada', 'rtaraniganty', 'rvaidya'}",3,"- The discussion is about finding the fastest way to get a tenant ID given a name. - The suggestion is to use the API endpoint at with an ilike rsql query filter. - It is confirmed that the API supports rsql. - There is a link to a GitLab merge request, but without further information, it is unclear how it relates to the discussion.",5,"['fastest way to get tenant ID', 'API endpoint supports rsql']",['unclear GitLab merge request'],,[] 2023-05-16T05:54:29.302749,2023-05-16T06:29:33.501779,aganivada,"{'aganivada', 'askumar'}",2,"- There is a change in CDK being made to use the same secret ID instead of creating a new parameter in secret manager. - The change will be completed by tomorrow. - Pjha is working on changing the password. - The migration plan document will be finished today. - The migration for Core will be done before the branch cut. - A plan will be put in place for Core in the stage as well, with additional steps if necessary.",6,"['change in CDK', 'use same secret ID', 'change completed by tomorrow', 'migration plan document finished today', 'migration for Core', 'plan for Core in stage']","['creating new parameter in secret manager', 'Pjha working on changing password', 'additional steps if necessary']",,[] 2023-08-10T05:18:33.833899,2023-08-22T03:24:01.112699,pjha,"{'aganivada', 'pjha', 'rtaraniganty'}",3,"- The writer instance and reader instance for the collab database have been launched from the latest snapshot in production. - The instances use the same username and password, and have been accessed successfully from the local terminal. - The team is planning to have a button in Retool to update the current timestamp in a record in the collab database. - This change will be made directly from the admin-gw to the cloned collab database instance. - The team plans to test the upgrade today and verify the operation from Retool. - During the upgrade, there was downtime of around 20 minutes with 0 connections, but once the upgrade completed, the connections were established back without making any changes in the application. - There were 500 errors during the upgrade. - The application was able to route connections automatically once the database was up, without the need to restart the services. - A support ticket has been raised to inquire about the memory configuration for a smooth upgrade.",19,"['Writer instance launched', 'Reader instance launched', 'Accessed successfully from local terminal', 'Upgrade tested and verified', 'Connections established after upgrade', 'Connections routed automatically', 'Support ticket raised']","['Downtime during upgrade', '500 errors during upgrade']",NEUTRAL,[] 2023-08-23T16:03:19.469639,2023-08-23T20:15:55.580629,rtaraniganty,"{'aganivada', 'araman', 'rtaraniganty'}",3,"- The Stage Retro tenant has become a tenant with no products. - The data is needed to prepare the Retro summary for the all hands meeting. - Satya tried to invite rtaraniganty to the tenant, but there is no option for multiple tenants after login. - rtaraniganty is associated with only one organization, so the option to select an organization is not shown. - aganivada invited operations@axiamatic.com as a vendor, but it caused issues. - rtaraniganty is now able to see the tenant details. - There is a suggestion to try using incognito mode.",10,"['rtaraniganty able to see tenant details', 'suggestion to try using incognito mode']","['Stage Retro tenant has no products', 'no option for multiple tenants after login', 'option to select organization not shown', 'issues caused by inviting operations@axiamatic.com as vendor']",,[] 2023-07-24T14:08:09.947669,2023-07-24T23:19:17.958989,anair,"{'anair', 'askumar'}",2,- Anair is requesting to add PacDen/17160 and Cisco/18096 to the snowflake dashboard. - Askumar agrees to add the requested tenants. - Askumar confirms that the tenants have been added.,3,['Add tenants to snowflake dashboard'],[],,[] 2023-06-22T04:54:48.524229,2023-06-23T04:53:25.167869,askumar,"{'aganivada', 'rtaraniganty', 'askumar', 'ppant'}",4,"- The team is trying to migrate a schema from an RDS instance to Aurora RDS writer. - They are facing an issue where only the referenceId in the table is being migrated but the referred blob is not being migrated. - They suspect that the issue is related to the ""lo_compat_privileges"" parameter, which controls access to the table pg_largeobject and pg_largeobject_metadata. - They want to turn on the ""lo_compat_privileges"" parameter to enable the migration of blobs. - There may be compatibility issues with applications that rely on the default behavior/permissions if the parameter is turned on. - AWS is not allowing them to change the ""lo_compat_privileges"" parameter, so they have raised a support ticket with AWS. - They are considering alternative solutions if AWS does not allow them to update the flag. - They have tried to migrate the blobs using a Python script, but it did not work for HTML blobs for the notification service. - They have a workaround for inserting HTML templates via a brute force POST API call. - They discuss the possibility of publishing the templates again in the int environment. - They mention that the notification service is also migrated to Aurora after migrating the notification templates with a script. - They express gratitude to the team members who helped with the migration.",20,"['successful migration', 'gratitude to team members']","['issue with blob migration', 'compatibility issues', 'AWS support ticket', 'alternative solutions', 'Python script not working', 'workaround for HTML blobs', 'publishing templates in int environment']",MIXED,[] 2023-05-09T16:01:43.692929,2023-05-09T22:30:37.122199,rtaraniganty,"{'aganivada', 'pjha', 'rtaraniganty'}",3,"- The team is experiencing issues with deploying services using CDKv2 code. - There is an error message stating that the authorizer name must be unique and that the authorizer already exists in the RestApi. - The team suspects that there may have been changes related to authorizers for multi-instance work, but it is confirmed that no such changes have been made. - The team discusses the differences in metadata and naming conventions between CDKv1 and CDKv2 for authorizers. - They make a manual adjustment to the authorizer name in the code to match the CDKv1 naming convention. - The deployment is successful after making this adjustment. - The team discusses the need to deploy all services once before merging to develop to ensure compatibility with CDKv2. - They also discuss the possibility of running both CDKv1 and CDKv2 on the same box for testing purposes. - The team plans to deploy all services and test CDKv2 during the weekend.",60,"['successful deployment', 'manual adjustment', 'compatibility with CDKv2', 'testing CDKv2']","['issues with deploying services', 'error message', 'differences in metadata and naming conventions', 'running both CDKv1 and CDKv2 on the same box']",,[] 2023-02-27T19:34:10.894129,2023-02-27T22:04:11.662459,rsrinivasan,"{'aganivada', 'rsrinivasan'}",2,"- rsrinivasan has gained access to Clarity, an internal monitoring tool for Imply. - rsrinivasan currently has access to the stage and int clusters and has requested access to the production cluster. - Clarity has a concept called alerts, similar to Lightstep, which supports webhooks to send alerts to Slack channels. - rsrinivasan plans to create a Jira ticket to create alerts for all environments. - aganivada is unsure if Opsgenie supports webhooks and wants to verify. - aganivada asks rtaraniganty to temporarily add either rsrinivasan or themselves as an admin to verify if Imply can be connected to Opsgenie.",5,"['rsrinivasan gaining access to Clarity', 'rsrinivasan requesting access to production cluster', 'Clarity alerts', 'Creating Jira ticket for alerts', 'Verifying if Opsgenie supports webhooks', 'Temporarily adding rsrinivasan or aganivada as admin to verify Imply connection to Opsgenie']",[],,[] 2023-05-24T09:10:58.529099,2023-05-25T23:30:33.235899,rtaraniganty,"{'mli', 'anair', 'mnirmal', 'araman', 'rtaraniganty'}",5,"- The team wants to change the CTA from ""Complete Pulse"" to ""Start Pulse"" in the gamification designs. - They also want to change the CTA on the FINAL Pulse page from ""Next"" to ""Submit"" to avoid implying that there are more questions to come. - The previous or back button for Slack was not available. - The request to update the copy to ""Start Pulse"" was made by rtaraniganty. - The ability to complete a half-finished Pulse will be introduced, with the CTA being ""Resume Pulse"". - mnirmal will update the Slack CTA based on the discussion.",7,"[""Change CTA from 'Complete Pulse' to 'Start Pulse' in gamification designs"", ""Change CTA on FINAL Pulse page from 'Next' to 'Submit'"", ""Introduce ability to complete half-finished Pulse with CTA 'Resume Pulse'"", 'Update Slack CTA']",['Previous or back button for Slack not available'],POSITIVE,[] 2023-05-26T05:31:04.759009,2023-05-26T10:05:56.606319,nsrivastava,"{'aganivada', 'pjha', 'nsrivastava'}",3,"- One of the jobs in SC (Sentiment Collector) got abruptly terminated with an error message. - There is a re-run of the job in the queue, but it is unclear if it was manually killed. - There was a similar behavior observed before with SC jobs, indicating a possible configuration issue. - The pipeline has been started to investigate the issue. - There was a sudden spike in CPU, which may be related to the job failure. - One test failed during the investigation, but it was triggered again to check the CPU spike. - The instance associated with SC is not responding and needs enhanced monitoring. - Alerts need to be set for resource shortage on the instance. - There was a change in the webhook configuration for SC.",12,"['re-run of job in queue', 'pipeline started to investigate issue', 'test triggered again to check CPU spike', 'enhanced monitoring needed for SC instance', 'alerts set for resource shortage', 'change in webhook configuration for SC']","['job abruptly terminated with error message', 'possible configuration issue with SC jobs', 'sudden spike in CPU', 'test failed during investigation', 'SC instance not responding']",NEGATIVE,[] 2023-08-17T02:18:55.187779,2023-09-04T23:33:19.343119,aganivada,"{'aganivada', 'bot_message', 'rvaidya', 'pmangalapuri', 'bganganna', 'amishra'}",6,"- There have been backend changes for the updated invitation flow in the application. - The in-flight invitation flow has been modified to only send one invitation instead of multiple invitations when a user is added to an engagement and hasn't onboarded yet. - Removing a user from an engagement should not result in deleting the invitation. - After onboarding, an API call needs to be manually invoked, but this flow should be automated once there are app and UI changes. - A 404 response is expected for the getStatus API call if the user does not have any open invitations or has an invitation added before the feature was deployed. - The app change for the updated invitation flow has been deployed in the INT environment for testing. - There is a need to refresh the page for all permissions to show up after closing membership, and there is a discussion about automating the page reload after closing membership. - There is a plan to add a flag in the info call to let the UI know that a page reload is needed. - A UI ticket has been created for the refresh token based on the flag from the info call. - There is a discussion about the flow of API calls at login time and the order in which they should be made. - A task has been created for the UI change related to refreshing tokens based on the flag from the info call. - The change related to refreshing tokens is not yet deployed in the INT environment. - The inviteMerged flag is part of the API response in the info call.",20,"['backend changes', 'in-flight invitation flow modification', 'refresh page for permissions', 'flag in info call', 'UI ticket for refresh token', 'API call flow at login time', 'task for UI change']","['deleting invitation when removing user', '404 response for getStatus API call', 'automating page reload after closing membership', 'refresh token change not deployed']",NEUTRAL,"['UI-2605', 'UI-2605']" 2023-05-03T13:56:23.857809,2023-05-04T21:12:33.222189,rtaraniganty,"{'aganivada', 'rtaraniganty', 'mnirmal'}",3,"- The release of version 0.9.10 has been completed. - A hotfix (HF) branch has been created for future deployments. - Mohith committed code to the release/0.9.10 branch, but it needs to be moved to the HF branch. - The release/0.9.10 branch has been deleted. - The cdkv2 changes can now be pushed to the develop branch. - The HF branch will be used for all HF deployments. - Tags have been published for version 0.9.10 in the cdk-artifacts project for potential issues after the cdkv2 changes.",9,"['Release of version 0.9.10 completed', 'Hotfix (HF) branch created', 'CDKv2 changes can be pushed to develop branch', 'Tags published for version 0.9.10 in cdk-artifacts project']","['Code needs to be moved from release/0.9.10 branch to HF branch', 'Release/0.9.10 branch deleted']",,[] 2023-02-22T21:39:26.792109,2023-02-22T22:51:19.408599,rvaidya,"{'rsrinivasan', 'rvaidya'}",2,"- The discussion is about whether the notification service supports attachments, specifically PDF files, to be sent along with emails. - Rsrinivasan confirms that the notification service does support attachments. - Attachments can be sent as inline attachments with base64 encoded data or as an S3 path. - For S3 attachments, the notification service reads from a predetermined bucket. - To send attachments via S3, the file needs to be written in the predetermined bucket and the path needs to be passed. - Rsrinivasan provides a link to system test cases for sending attachments with the notification service.",2,[],[],,[] 2023-07-03T22:46:11.430299,2023-07-04T03:34:37.710679,pjha,"{'aganivada', 'pjha', 'askumar'}",3,"- RDS Proxy is being discussed and reviewed. - It has been configured for a test database but not enabled for the actual one. - The team wants to configure it for the int database to check if they are getting the desired stats. - There is a question about whether the RDS secrets need to be recreated for this. - The team is considering adding the DB information to the existing secret value, but if that doesn't work, they might have to create a new secret. - A new secret has been created for the proxy to communicate with the database. - There is a concern about managing two secrets and it would be better to have a single secret. - The existing secret only has 'db_user' and 'db_password' while the new secret has additional database information. - There is a need to modify the code or have one more secret to align the 'db_user'/'db_password' with the 'username'/'password' used in the code to connect to the database.",9,"['RDS Proxy configuration', 'Testing RDS Proxy', 'Creating new secret for RDS Proxy', 'Adding DB information to existing secret', 'Managing secrets for RDS Proxy']","['Not enabling RDS Proxy for actual database', 'Concerns about managing two secrets', 'Modifying code to align with secret']",UNKNOWN,[] 2023-06-08T15:46:45.265519,2023-06-08T21:14:47.771399,rtaraniganty,"{'aganivada', 'rtaraniganty'}",2,"- The person named rtaraniganty took a day off and missed the retro, which resulted in not hitting the highest per capita in the last 15 weeks. - The person had 30 points and if they had closed their previous sprint tasks before the retro, they would have achieved the highest per capita. - The person named aganivada mentions that the previous highest per capita was 160+ less than a month ago. - The discussion participants now have a new goal to beat the previous highest per capita.",2,['new goal to beat highest per capita'],"['person missed retro', 'not hitting highest per capita']",,[] 2023-04-19T05:47:01.968619,2023-04-19T06:42:59.350359,askumar,"{'rsrinivasan', 'askumar'}",2,"- The discussion is about designing a schema for storing event-based metrics in Snowflake. - The initial schema design includes basic info fields and a JSON field to store all the event info. - The events table has been populated with around 3k records for pulse events such as Pulse Sent, Button clicked, and Submitted. - Snowflake has good support for JSON, so the main table can be tightly coupled with the pulse-related event and other columns can be put inside the data part of it. - The team will modify the schema accordingly. - An alternate base table has been added to Snowflake and documented.",4,"['schema design', 'event-based metrics', 'Snowflake', 'JSON support', 'pulse events', 'schema modification', 'alternate base table']",[],,[] 2023-01-05T00:30:16.886919,2023-01-05T00:53:59.345729,akasim,"{'aganivada', 'sranjan', 'sfarooqh', 'pkarthikeyan', 'akasim'}",5,"- The merge from the hotfix branch to the develop and main branches was successful. - However, the pipeline build did not automatically kick in for both the develop and main branches. - The hotfix branch was not deleted from Git. - There is a warning in the screenshot regarding remote branch deletion, which may be the reason for the pipeline not triggering. - Multiple team members experienced the same issue on different branches. - GitLab was experiencing an outage, which may have caused the pipeline build issue. - Apart from the branch deletion and pipeline triggering, there were no other irrecoverable issues observed. - The team members were able to manually trigger the pipeline.",13,"['successful merge', 'manual pipeline triggering']","['pipeline build issue', 'branch deletion issue', 'GitLab outage']",NEUTRAL,[] 2023-06-20T00:28:38.740619,2023-06-20T01:19:13.171209,sranjan,"{'aganivada', 'sranjan'}",2,- Sranjan is requesting someone to bounce MB in stage. - Aganivada acknowledges the request and mentions that it will be done in another 5-10 minutes. - The bounce of MB in stage is being triggered.,2,['requesting someone to bounce MB in stage'],[],,[] 2023-02-16T08:35:28.500489,2023-02-16T10:23:28.482779,rtaraniganty,"{'rsrinivasan', 'rtaraniganty'}",2,"- There is an API available for searching on tenancy. - The API endpoint for searching on tenancy is . - The ""pin"" parameter in the API stands for Partial in operator, which takes an array of patterns. - The user ""rtaraniganty"" has implemented a script that uses the ""=ilike=<given string>"" format for searching, which serves the purpose for now.",8,"['API available for searching on tenancy', 'API endpoint for searching on tenancy', 'pin parameter in the API', ""user rtaraniganty's script for searching""]",[],,[] 2023-07-05T07:22:15.678079,2023-07-05T07:27:20.284079,aganivada,"{'aganivada', 'pjha'}",2,- Aganivada wants to pull the latest develon of CDK to the multi-instance branch and resolve conflicts. - Aganivada asks PJha to check for conflicts and resolve them when PJha gets a chance. - PJha agrees to check for conflicts and resolve them later.,7,"['pull latest develon of CDK', 'resolve conflicts']",[],,[] 2023-09-06T08:53:30.021479,2023-09-07T00:23:39.146849,rsrinivasan,"{'aganivada', 'rsrinivasan', 'mnirmal', 'hchintamreddy'}",4,"- rsrinivasan received three notifications: one for providing feedback for completing a survey, one for submitting a pulse, and one for a new pulse being created. - It is unclear if receiving all three notifications was intended or not. - mnirmal asked rsrinivasan if they used the ""remind me later"" functionality or if they were nudged from the UI. - rsrinivasan confirmed that they did not use the ""remind me later"" functionality and were not sure if they were nudged from the UI. - aganivada mentioned that they nudged users a few times from the dashboard. - mnirmal asked for confirmation of the instance ID and requested the result of a SQL query. - hchintamreddy provided the requested SQL query result, showing four records for rsrinivasan. - mnirmal confirmed that the count of records matched the number of messages in rsrinivasan's screenshot. - rsrinivasan questioned why all four nudges were updated instead of just the last/latest one or sending a new message indicating completion. - mnirmal mentioned that they couldn't find the thread where the decision to update all nudge messages was made. - mnirmal explained that the short span of time between the notifications was due to aganivada triggering the nudges at different times. - aganivada confirmed that the nudges were manually triggered. - mnirmal suggested updating the knowledge base if the approach of updating all nudge messages is being followed.",20,"['notifications', 'feedback', 'pulse', 'remind me later', 'nudge', 'UI', 'instance ID', 'SQL query', 'dashboard', 'knowledge base']","['unclear', 'count of records', 'updating all nudge messages', 'decision']",NEUTRAL,[] 2023-04-19T21:23:25.549509,2023-04-19T21:23:51.163559,rsrinivasan,"{'aganivada', 'rsrinivasan'}",2,"- rsrinivasan's son is unwell and needs to be taken to the hospital. - rsrinivasan will connect in the evening, indicating that he may be busy taking care of his son. - aganivada expresses concern and wishes rsrinivasan to take care.",2,[],[],POSITIVE,[] 2023-04-05T21:31:59.784839,2023-04-06T05:10:45.560209,aganivada,{'aganivada'},1,- Aganivada wants to sync up with ppant and rsrinivasan to run the SSO migration script in production. - They plan to do the setup and migration either today or split it between today and Monday. - The goal is to replicate a tenant similar to lvsands in terms of roles and permissions. - Aganivada informs rsrinivasan and ppant that the code has been updated to skip non-existing users. - Aganivada asks rsrinivasan and ppant to let them know when they are free to continue.,2,"['SSO migration', 'sync up', 'setup and migration', 'replicate tenant', 'code update', 'skip non-existing users']",['unknown'],POSITIVE,[] 2023-05-29T23:49:43.901229,2023-05-30T02:00:50.815289,nsrivastava,"{'aganivada', 'nsrivastava'}",2,"- nsrivastava is facing an issue while uploading the Slack-Integrator snapshot dependency. - The job mentions building the artifact `slack-integrator-0.0.5-SNAPSHOT`, but it is not present in the package registry. - The SC MR is failing due to the dependency not being found. - The snapshots seem to get published fine in the core services data repository. - The registry has version `0.0.17-COLL-2498-SNAPSHOT` that was just cut from develop. - The slack-integrator for this MR has version `0.0.5-COLL-2498-1-SNAPSHOT`. - aganivada suggests triggering the slack-integrator pipeline again as it may have been a temporary issue in GitLab. - nsrivastava triggers the pipeline again. - The issue is fixed by syncing dev with main, which resolves the missing commits.",6,"['issue resolved', 'pipeline triggered again', 'issue fixed']","['dependency not found', 'job failure', 'missing commits']",,"['COLL-2498', 'COLL-2498']" 2023-01-26T14:20:53.249369,2023-01-30T00:51:18.775899,anair,"{'aganivada', 'anair'}",2,"- The team is discussing the possibility of measuring metrics from start date to end date instead of using fixed time intervals. - Anair suggests surfacing pulse instance ID for easy lookup using swagger. - Anair proposes several actions and messaging statements for implementation, with priority given to ""between"" and ""in the last"" options. - Aganivada mentions that customer and vendor dimensions may not be available in Grafana and suggests checking with Mnirmal to add these dimensions. - Aganivada highlights the limitation of Logz in terms of data retention compared to Grafana. - Anair emphasizes the need to prioritize this discussion. - Aganivada plans to schedule a call with Mnirmal to discuss adding more attributes to support the metrics.",8,"['possibility of measuring metrics from start date to end date', 'surfacing pulse instance ID for easy lookup using swagger', 'proposing actions and messaging statements for implementation', 'checking with Mnirmal to add customer and vendor dimensions', 'highlighting the limitation of Logz in terms of data retention', 'prioritizing the discussion', 'scheduling a call with Mnirmal to discuss adding more attributes']",[],,[] 2023-04-11T12:42:16.026399,2023-04-11T22:36:20.008569,aganivada,"{'aganivada', 'anair', 'rsrinivasan'}",3,"- The team is discussing options for automatic membership in a group using Auth0's auto membership configuration. - Users can be provided access to the app and will automatically be added to the organization when they login. - However, they may see an error page if they are not assigned a role. - The advantage of this approach is that admins won't have to manually add users and send invitations via Axiamatic. - The team suggests combining this with a rule that assigns users automatically to any role based on configuration. - They also discuss the possibility of assigning all users as org admins by default for now, which can be easily done through Auth0. - The org admin role has been assigned to a specific user, and it is suggested to delete any invitations to avoid any issues.",8,"['automatic membership', ""Auth0's auto membership configuration"", 'users provided access', ""admins won't have to manually add users"", 'assigning users automatically to roles', 'assigning all users as org admins', 'deleting invitations']",['error page if not assigned a role'],POSITIVE,[] 2023-06-04T23:39:12.321359,2023-06-05T10:21:09.497289,rsrinivasan,"{'aganivada', 'pjha', 'rsrinivasan'}",3,"- rsrinivasan is experiencing a ""not enough space"" error on the Axiamatic GitLab runner. - The error is caused by a failure to get the Docker client for a specific image. - pjha suggests checking the available space on the server. - pjha confirms that there is 21% space used and 93G available. - aganivada suggests setting up an alert when capacity reaches 60%. - rsrinivasan asks if the space keeps increasing after every build and suggests deleting Docker images locally after each build. - pjha confirms that the earlier used space was 114G out of 117G. - pjha mentions that they have assigned a task to enhance monitoring for GitLab. - aganivada asks if unused containers will be cleaned up as part of the task. - rsrinivasan thanks pjha for the help and confirms that they are able to generate build and deploy. - pjha confirms that they will schedule a cron job to clean unused images and volumes.",12,"['available space on server', 'enhance monitoring for GitLab', 'clean unused images and volumes']","['not enough space error', 'failure to get Docker client', 'space keeps increasing after every build']",,[] 2023-06-21T00:57:10.646369,2023-06-22T17:00:33.815039,pjha,"{'aganivada', 'pjha', 'rtaraniganty'}",3,- pjha is asking for a review of the deployment process for a lambda function using tags. - aganivada has added some comments and pjha will review them. - They plan to demo the deployment process during SOS. - rtaraniganty suggests tagging feature branch versions with 'SNAPSHOT' and adding a pre-commit hook script to ensure version flow. - rtaraniganty also asks if the .gitlab-ci.yml file will be updated in all lambda repositories at once. - rtaraniganty suggests avoiding apt-get installs during build and moving them into the base image.,6,"['review of deployment process', 'demo during SOS', 'tagging feature branch versions', 'pre-commit hook script', 'updating .gitlab-ci.yml file', 'moving apt-get installs into base image']",[],,[] 2023-05-17T03:17:03.949609,2023-05-28T22:45:10.684489,askumar,"{'aganivada', 'rtaraniganty', 'askumar'}",3,"- The team is discussing a general migration plan document and the potential impact on services. - The liquibase tables need to be merged, and the team is exploring options for how to do this efficiently. - The estimated time for the entire INT migration to complete is two releases. - There is a suggestion to prepopulate the liquibase table and only copy the delta during migration to avoid each service having to copy the liquibase table data. - The team is considering using pgdump to update/insert only the missing records in the database when moving a new service. - It is mentioned that pgdump works by creating tables, populating data, and adding constraints, but it is unclear if it can copy just the delta. - There is a discussion about taking a backup and restoring the schema/table data to keep it in sync. - One team member had a personal exigency while on vacation, causing a delay in their response.",12,"['general migration plan document', 'exploring options for efficient merging of liquibase tables', 'suggestion to prepopulate liquibase table and copy delta during migration', 'considering using pgdump for updating/inserting missing records', 'discussion about taking backup and restoring schema/table data']",['delay in response due to personal exigency'],NEUTRAL,[] 2023-04-03T21:08:25.085279,2023-04-04T07:29:28.482589,aganivada,"{'aganivada', 'ppant'}",2,- Aganivada is planning to spend 2-3 days in the next sprint on refactoring tenancy. - Aganivada mentions that the code and formatting of the tenancy has become unmanageable over time. - Ppant offers to help Aganivada with the refactoring task. - Ppant mentions that they have finished coding the lambda and are currently working on development.,3,"['refactoring', 'help']","['unmanageable', 'code', 'formatting']",,[] 2023-05-08T23:39:46.558849,2023-05-09T08:07:14.386089,ppant,"{'aganivada', 'ppant'}",2,"- Python's requests library rolled out a new version 2.30.0 on 3rd May. - The new version made the previously used `boto3=1.26.93` incompatible with `urllib3 < 1.26.5`. - The version of requests that works with all the dependencies is `2.28.2`. - The Poetry dependencies include various packages such as bandit, botocore, black, coverage, flake8, ipython, isort, jinja2-cli, mkdocs, mkdocstrings, mkdocs-material, safety, pylint, pytest, pytest-cov, pytest-sugar, pytest-xdist, and urllib3.",2,"[""Python's requests library version 2.30.0"", 'Poetry dependencies']",['Incompatibility between requests and boto3'],,[] 2023-06-01T21:43:46.535339,2023-06-01T22:05:28.275509,rtaraniganty,"{'aganivada', 'rtaraniganty'}",2,- The code has been checked in after making some modifications. - There is a need to find a way to allow the SELECT operation to work. - The code can be found at the specified GitLab repository.,2,"['code modifications', 'GitLab repository']",['SELECT operation'],,[] 2023-05-24T09:13:52.532149,2023-05-24T09:31:49.257739,bganganna,"{'bganganna', 'mnirmal'}",2,"- There was a discussion about improvements to the pulse card. - The term ""Complete Pulse"" was mentioned as part of the specification. - There was a mention of a back button for a Slack survey. - It was acknowledged that there may be a previous button in Teams, but it may not be available in Slack.",4,"['improvements to pulse card', 'Complete Pulse']","['back button for Slack survey', 'previous button in Teams']",,[] 2023-03-07T19:40:33.928209,2023-03-09T12:49:26.085139,rtaraniganty,"{'aganivada', 'svummidi', 'rsrinivasan', 'rtaraniganty'}",4,"- There is an issue with accessing a new library on GitLab Maven. - The builds pass fine locally, but fail on GitLab with an access forbidden error. - The issue seems to be with authorization and not just the cache. - The authorization model for pulling libraries on GitLab is dynamic token-based. - The issue is specific to the new library, as other libraries can be pulled successfully. - Clearing the runner cache and using the -U command did not resolve the issue. - The error message indicates a 403 Forbidden status. - There is a discussion about whether the issue is with the local build or GitLab. - GitLab support has been contacted for assistance. - GitLab recently released a new feature that requires listing all projects allowed to access the artifacts of the current project. - This feature is causing issues for new libraries, but not existing ones. - There is confusion about the reasoning behind this feature and its impact on security. - There is a concern about the security implications of disabling the feature. - The support team is being asked to clarify the security concerns and reasoning behind the feature. - The support team initially mentioned that the feature is to restrict access within the organization. - The support team is being asked to provide a clearer explanation of the feature and its implications.",60,"['Builds passing locally', 'Other libraries can be pulled successfully', 'Contacted GitLab support', 'New feature released by GitLab', 'Restricting access within the organization']","['Issue with accessing new library on GitLab Maven', 'Builds failing on GitLab', 'Authorization issue', '403 Forbidden error', 'Confusion about feature reasoning and impact on security', 'Concerns about disabling the feature', 'Lack of clarity from support team']",NEGATIVE,"['PLAT-1620', 'PLAT-1620', 'PLAT-1620']" 2023-08-31T03:19:55.981009,2023-08-31T21:02:07.781989,aganivada,"{'aganivada', 'pmangalapuri', 'rtaraniganty', 'amishra'}",4,"- The team is planning to make changes to the UI stack deployment. - They need to add a new S3 bucket and link it to the CloudFront stack. - They are discussing the domain name to be used for the new S3 bucket. - They need to make changes to the CDK CloudFront to add additional options. - They are considering generating tar files instead of images for the CICD model. - They are evaluating the existing Gitlab - ECR account model for deployment. - They want to hydrate env.js during deployment before uploading it to the S3 bucket. - They need to update the UI CDK deployment code to support both models. - They want to add a command to invalidate CloudFront cache after publishing content to the S3 bucket. - They plan to add CloudWatch alarms for error responses from CloudFront logs. - They are considering adding checksum validation at the end of deployment. - They need to evaluate the process to cleanup stale files in the S3 bucket. - The main driver for moving away from the container model is to speed up the UI and serve directly from CloudFront. - There is a demo available on the advantages of hosting using CloudFront over hosting in ECS. - The team is targeting sprint ""PLAT Sprint 110 - 0.9.15 #4"" for these changes. - The change is driven by tech debt and benefits in user experience, not by product prioritization.",11,"['UI stack deployment changes', 'Adding new S3 bucket', 'Linking S3 bucket to CloudFront stack', 'Adding additional options to CDK CloudFront', 'Hydrating env.js during deployment', 'Updating UI CDK deployment code', 'Adding command to invalidate CloudFront cache', 'Adding CloudWatch alarms for error responses', 'Demo on advantages of hosting using CloudFront', 'Targeting sprint PLAT Sprint 110 - 0.9.15 #4']","['Considering generating tar files instead of images for CICD model', 'Evaluating Gitlab - ECR account model for deployment', 'Adding checksum validation at the end of deployment', 'Evaluating process to cleanup stale files in S3 bucket', 'Moving away from container model']",POSITIVE,"['PLAT-2488', 'PLAT-2488']" 2023-03-28T15:14:25.863659,2023-03-29T21:08:17.152839,svummidi,"{'anair', 'rsrinivasan', 'avenkataraman', 'svummidi', 'rtaraniganty'}",5,"- There is a concern about including topics from the question in the topic analysis. - The current approach includes topics from both the question and the answer, but it is impacting the accuracy of the topics. - There is a suggestion to do GPT prompt engineering to establish context and exclude topics from the question. - The cost for each call to OpenAI is estimated to be $1.50 for 1K comments. - It is suggested to whitelist certain tenants to avoid wasting money on unnecessary calls. - Removing terms from the question in the final topic list may result in the exclusion of key terms from the answers. - Topics are typically picked from questions when the answer is empty or irrelevant, but the prompt will be modified to only pick topics from the answer. - The prompt has been fixed and the topics are now looking good. - The retro tenant has been checked for any odd sentiment detection or topics. - There is a discussion about summarization approaches for the retro. - The new prompt for gpt3.5 can be found in the provided link.",19,"['GPT prompt engineering', 'Whitelisting tenants', 'Fixing prompt', 'Checking retro tenant', 'Summarization approaches', 'New prompt for gpt3.5']","['Including topics from question', 'Impacting accuracy', 'Cost of OpenAI calls', 'Exclusion of key terms', 'Odd sentiment detection']",UNKNOWN,['DAT-939'] 2023-05-12T03:48:34.387719,2023-05-12T12:58:21.101659,mnirmal,"{'aganivada', 'pjha', 'rtaraniganty', 'mnirmal'}",4,- mnirmal is looking for the deployment dependencies page for version 0.9.11. - aganivada will verify if the page exists. - pjha will check if the deployment dependency page has been added for version 0.9.11 and will use the latest deployment dependency template. - pjha shares the link to the deployment dependencies page for version 0.9.11 with mnirmal and cc's aganivada. - rtaraniganty expresses gratitude to pjha.,5,"['deployment dependencies page', 'version 0.9.11']",[],,[] 2023-07-13T07:51:28.626099,2023-07-13T08:30:44.749569,aganivada,"{'aganivada', 'askumar'}",2,"- Aganivada asked Askumar if they can shutdown commons RDS in stage. - Askumar agreed to remove it. - Askumar confirmed with Aganivada if the deletion of the instance would be fine, mentioning that they have backed up the data and there are no more connections to it. - Aganivada confirmed that it would be fine. - Askumar thanked Aganivada and stated that they have removed the commons RDS instances.",6,"['shutdown commons RDS in stage', 'remove commons RDS instances']",[],,[] 2023-08-30T00:41:56.844789,2023-08-30T08:25:30.596299,psomasamudram563,"{'aganivada', 'psomasamudram563'}",2,"- The user ""psomasamudram563"" is requesting to shut down some services in the AWS Core Account. - The user ""aganivada"" agrees to the request. - ""aganivada"" informs a team or subteam named ""@plat"" and ""@data"" about deleting all the zones added to the ""devx"" account. - ""aganivada"" confirms that all the hosted zones have been deleted and the bastion host has been shut down. - ""psomasamudram563"" expresses gratitude for the actions taken.",5,"['shut down services', 'delete zones', 'delete hosted zones', 'shut down bastion host']",[],,[] 2023-07-28T10:20:03.511329,2023-07-30T20:22:05.363959,rtaraniganty,"{'aganivada', 'askumar', 'rtaraniganty', 'rvaidya'}",4,"- There is a discussion about the usage of the epoch service and whether anyone is making calls to it. - It is mentioned that the only invocation of the epoch service is from the tenant deletion workflow in logs. - There are no new instances of epoch workflows being created or used. - It is planned to take down the tagging app in the 0.9.14 deployment, as there was no formal deployment of 0.9.13 to production. - The decision to retire the tagging app is mentioned to be communicated to sbhosale and gshenoy.",4,"['usage of epoch service', 'invocation of epoch service from tenant deletion workflow', 'retiring the tagging app']","['no new instances of epoch workflows', 'no formal deployment of 0.9.13 to production']",NEUTRAL,[] 2022-03-09T16:51:14.414469,2023-01-04T20:56:14.180639,rtaraniganty,"{'aganivada', 'rtaraniganty'}",2,"- The authorizer attached to the API Gateway does not currently understand S2S tokens. - Swagger and health URLs are whitelisted and do not require a token check. - There is a plan to update the custom authorizer to allow calls from an associated VPN. - For system tests made from GitLab, health calls can be tagged and excluded from the check. - There is a suggestion to have a pass-through authorizer for non-apps that only allows calls from the VPN. - The team is considering implementing an IP-based authorizer lambda to allow or deny based on IP range from the VPN.",11,"['Updating custom authorizer to allow calls from associated VPN', 'Implementing IP-based authorizer lambda']","['Authorizer not understanding S2S tokens', 'Swagger and health URLs whitelisted and not requiring token check', 'Excluding health calls from token check for system tests', 'Suggestion for pass-through authorizer for non-apps', 'Considering implementing IP-based authorizer lambda']",,[] 2023-05-31T20:38:09.881599,2023-05-31T23:13:23.682969,aganivada,"{'aganivada', 'ppant'}",2,- ppant and pjha have achieved something worth celebrating - aganivada congratulates ppant and pjha - ppant suggests that aganivada should come to Bangalore to give them a party - aganivada expresses willingness to come to Bangalore for a party,3,"['achievement', 'celebration', 'congratulations', 'party']",[],,[] 2023-08-24T08:30:34.145749,2023-08-24T10:10:43.175859,rtaraniganty,"{'aganivada', 'rtaraniganty'}",2,"- The discussion is about enabling SSO (Single Sign-On) for users in the 'Unbound' system using Auth0. - One user had both username-password and SAML enabled, causing an issue. - A list of users who need to be re-invited to join via SAML is provided. - The issue occurred because a flag was not turned on, which forces all new invitations to go via SAML. - The flag was enabled in the production environment after testing in the staging environment. - There is a suggestion to update the instructions for enabling SSO to include information about the flag. - There is a question about why a certain flag was marked as not required, and it is suggested to remove it. - The updated instructions for checking the flag's status are shared for review. - The reason for marking the flag as not required may be due to targeting being turned on after creating the ticket. - There is a suggestion to tighten the process of handling non-permanent flags with targeting.",14,"['enabling SSO', 'Auth0', 'flag enabled in production', 'updating instructions', 'reviewing flag status', 'tightening process']","['issue with username-password and SAML', 're-inviting users', 'flag not turned on', 'flag marked as not required']",NEUTRAL,['PLAT-1760'] 2023-08-07T18:58:24.371619,2023-08-07T20:26:02.234169,aganivada,"{'aganivada', 'askumar'}",2,"- The discussion is about a specific link or issue related to a database migration. - The issue mentioned in the link occurred after all the migrations were completed in the stage environment. - The conversation involves at least three participants: aganivada, askumar, and rtaraniganty.",2,[],[],UNKNOWN,[] 2023-01-30T01:15:32.783079,2023-01-30T03:07:35.687589,aganivada,"{'aganivada', 'bganganna'}",2,"- There is an issue with a specific task (PLAT-1636) that needs to be addressed. - The issue was not covered in Figma, a design tool. - There is another behavior noticed where if a product is removed by a product admin, it is also deleted from the tenant for orgadmin. - There is a question about whether the product should be deleted from the tenant or just removed from the ProductAdmin role. - The expected behavior is that once a product admin is assigned, they have all permissions as the orgadmin on that specific product.",4,"['specific task issue (PLAT-1636)', 'expected behavior']","['issue not covered in Figma', 'behavior of product removal']",,['PLAT-1636'] 2023-02-19T23:50:26.013889,2023-02-20T20:46:53.954669,rvaidya,"{'aganivada', 'apatole', 'mli', 'rvaidya', 'avenkataraman', 'rtaraniganty'}",6,"- The discussion is about whether to split the customer table into two tables: one for connected customers and another for invited customers. - The main concern is regarding pagination and how to support the two sets of information coming from different sources. - One suggestion is to separate open invitations from actual customers for better UX and easier pagination. - Splitting into two tabs is seen as a solution to avoid API implementation issues and improve usability. - There is a question about whether this problem applies only to the Customers list on the Vendor Dashboard or if it is relevant to other surfaces as well. - The idea of having a default sort where invite pending customers are listed at the end is discussed, but it may not be a suitable solution due to potential sorting and visibility issues. - Using tabs instead of sorting and filtering is favored because it can distinctly display categories, resolve technical issues, and allow for adjusting default columns. - A design proposal for using tabs in list views is shared for further review.",11,"['Splitting customer table', 'Separating open invitations', 'Using tabs in list views']","['Pagination', 'API implementation issues', 'Sorting and visibility issues']",UNKNOWN,[] 2023-02-09T21:22:37.919879,2023-02-13T09:23:30.310499,rtaraniganty,"{'aganivada', 'askumar', 'rtaraniganty'}",3,"- The team needs to inform Grafana that they cannot do the mapping for individual ids and would prefer to periodically dump the mapping into Grafana's store and use that in transformations. - Grafana responded that such mapping is unavailable when using transformations and suggested that the mapping needs to happen on a datasource level. - The team is using Prometheus to scrape metrics and has asked for steps to do the mapping for Prometheus. - The mappings required are tenantid to tenant name, user id to email/user name, pvtid to product name, and product instance to product name. - The team is considering using a name cache in Redis that gets written to whenever an entity gets created or deleted, and periodically to avoid misses. This cache would be consulted to translate ids to names before the metrics are published. - The team is also exploring options available from Grafana/Prometheus to automatically link the mappings, similar to a foreign key in a database. - There are discussions and references to using SQL data to label time series in Grafana, but it is unclear if this would work for their specific use case.",14,"['Informing Grafana about inability to do individual id mapping', ""Periodically dumping mapping into Grafana's store"", 'Using Prometheus to scrape metrics', 'Using a name cache in Redis', 'Exploring options for automatic mapping in Grafana/Prometheus']","['Mapping unavailable when using transformations in Grafana', 'Uncertainty about using SQL data to label time series in Grafana']",NEUTRAL,[] 2023-03-23T04:28:18.093899,2023-03-25T09:02:13.653789,aganivada,"{'aganivada', 'bganganna', 'rtaraniganty'}",3,"- The sanity test suite is taking more than 1 hour in the stage, causing job timeouts. - The reason for the increased time is the addition of delete product and publishers in the tear down, which is taking time due to the large number of records that have not been deleted for some tenants. - It is suggested to delete the data from the database to improve the performance. - Deleting the existing records will reduce the number of records that need to be cleaned up as part of the test. - It is proposed to manually clean up the records at a time when there isn't much traffic instead of relying on the test suite to do it. - The cleanup process has been initiated in the stage environment and a stage automation test has been triggered. - The cleanup job pipeline took around 35 minutes to clean up around 1k product records and 1k publisher records in the stage environment. - The execution time of the test setup and run is also adding to the overall execution time. - There are unnecessary token calls being made that can be optimized to reduce the time taken for operations. - A merge request has been created to address the optimization of token calls. - A backup of the core prod has been taken before triggering the cleanup process. - The cleanup process will be done in batches, starting with 100 records and then 1000 records in 4 batches. - The progress of the cleanup process can be monitored through the provided links. - The number of records to be scanned per iteration has been adjusted to reduce the count.",23,"['improved performance', 'manually clean up records', 'cleanup process initiated', 'cleanup job pipeline', 'optimization of token calls', 'backup of core prod taken', 'cleanup process in batches', 'adjustment of record count']","['sanity test suite taking more than 1 hour', 'job timeouts', 'increased time due to delete product and publishers', 'large number of records not deleted', 'unnecessary token calls']",NEGATIVE,[] 2023-08-17T09:16:13.889709,2023-08-17T09:19:54.303319,aganivada,"{'aganivada', 'bkathi', 'rsrinivasan'}",3,"- The demo was considered very cool and the work was praised. - The person named bkathi received compliments for their work. - bkathi apologized for missing the initial slot for the platform. - aganivada mentioned that they didn't find bkathi's name in the teams invitee's list, so they thought they missed inviting them.",5,"['demo', 'work', 'bkathi']",[],,[] 2023-06-05T04:47:41.110669,2023-06-05T08:45:49.596049,snangia,"{'aganivada', 'pjha', 'rtaraniganty', 'snangia'}",4,"- The Gitlab pipeline in pulse-manager has started failing recently with a 401 Unauthorized error. - Clearing the runner cache did not help resolve the issue. - The issue seems to be related to the authentication of the parent POM artifact. - The issue has been raised with Gitlab support, but they keep closing the ticket stating that the user record is not associated with any organization record. - The issue is not specific to the shared-runner, as the same issue occurs when running on the GL runner. - The issue is not related to a recent upgrade of the parent POM. - The SC builds are working fine with the same parent POM version. - A support ticket has been raised with Gitlab to investigate the unauthorized error. - The team is trying to figure out how to add people to the Gitlab account to submit support tickets. - The team is considering creating a project to manage support contacts. - The parent POM versions are the same for both SC and PM.",30,"['Gitlab pipeline', 'pulse-manager', 'authentication', 'support ticket', 'support contacts', 'parent POM']","['401 Unauthorized error', 'closing ticket', 'unauthorized error']",NEGATIVE,['COLL-2520'] 2023-06-26T08:37:56.824139,2023-06-26T09:02:42.715909,ppant,"{'rtaraniganty', 'ppant'}",2,"- The discussion is about updating the configs in SSM for multi-instance. - The proposed update will make the backend code more structured. - The multi-instance configs include services like tenancy-service, core-data-service, and onboarding-app-service. - The versions of these services are mentioned as plat-1 and plat-2. - It is suggested to have a script to manage the update instead of manual updates. - The plan is to implement the update via a script.",3,"['updating configs in SSM', 'making backend code more structured', 'implementing update via script']",['manual updates'],,[] 2023-02-28T00:36:10.171269,2023-02-28T02:59:17.149719,aganivada,{'aganivada'},1,- Aganivada had to step out for some personal work and would be away for the next 2 hours. - Aganivada informed the team by mentioning the subteam @plat. - Aganivada later returned and mentioned being back.,2,[],[],,[] 2023-04-11T23:29:21.862829,2023-04-11T23:34:51.262459,araman,"{'aganivada', 'araman'}",2,"- Araman will not be able to attend today's sprint retro. - The sprint will be closed before the retro and new tasks will be added to the backlog. - The major focus for systest for the next 2 days will be 0.9.9 HF on 4/14. - 0.9.10 items will be picked up for test planning in the next sprint, with priority given to the ones already available on int for testing. - Bganganna and Vtiwari are cc'd in the conversation.",2,[],[],,[] 2023-06-15T21:40:20.610789,2023-09-04T22:51:43.690779,svummidi,"{'aganivada', 'svummidi', 'rsrinivasan', 'bot_message'}",4,"- The team is discussing how to send Imply Polaris metrics to Lightstep. - They are currently using Amazon Open Telemetry Distro to publish metrics to Lightstep. - They are considering adding a standalone instance of the distro container to scrape metrics and publish them to Lightstep. - They discuss the possibility of adding the metrics to the existing stack or creating a separate stack. - They mention the need to filter metrics to avoid running out of space in Lightstep. - They discuss the metrics they are currently getting from Polaris and the possibility of linking them to the metrics of the score provider in Lightstep. - They discuss the need to prefix the metrics with ""polaris_"" to avoid confusion. - They discuss the scrape interval for the metrics, starting with 60 seconds. - They mention using the same cluster for all three environments and appending an environment flag to the metrics. - They discuss the need to create separate dashboards for each environment and mention the metrics they want to add to the dashboard. - They mention the need to create alerts on certain metrics. - They discuss the metrics they are currently getting from the Polaris endpoint and mention the full list of metrics available in Polaris. - They discuss the possibility of enabling ingestion metrics in Polaris and mention the metrics they are currently publishing. - They mention the need to customize the dashboard to show only metrics from specific environments. - They conclude that the metrics from Polaris look good and discuss the next steps.",29,"['sending Imply Polaris metrics to Lightstep', 'using Amazon Open Telemetry Distro', 'adding standalone instance of distro container', 'filtering metrics to avoid running out of space', 'linking Polaris metrics to score provider metrics', ""prefixing metrics with 'polaris_'"", 'scrape interval for metrics', 'using same cluster for all environments', 'creating separate dashboards for each environment', 'adding metrics to dashboard', 'creating alerts on certain metrics', 'enabling ingestion metrics in Polaris', 'customizing dashboard to show specific environments', 'metrics from Polaris look good', 'next steps']",[],,"['PLAT-2478', 'PLAT-2518']" 2023-05-09T11:36:13.837949,2023-05-10T11:49:34.757189,hchintamreddy,"{'aganivada', 'svummidi', 'rtaraniganty', 'hchintamreddy'}",4,"- hchintamreddy wants to add a feature to com.axm.platform.commons.config.WebClientConfig#s2sWebClient to identify the service the call originated from. - The current default value for the User-Agent header is ""ReactorNetty/1.0.22"". - svummidi suggests prefixing the User-Agent value with ""InternalService-"" for filtering purposes. - hchintamreddy agrees to add the prefix as suggested. - rtaraniganty suggests using a shorter prefix like ""axm-"" instead. - hchintamreddy agrees to use ""axm-"" as the prefix.",6,['Add feature to identify service origin'],[],,[] 2023-07-06T14:20:14.314289,2023-07-06T23:42:07.222339,svummidi,"{'aganivada', 'svummidi', 'rsrinivasan'}",3,"- There is an issue with logs showing different formats, some with ""message"" and some with ""message_obj.msg"". - The format of the logs depends on the component that is logging the statement. - The logs with ""message"" are mostly spring default ones, while the logs with ""message_obj.msg"" are typically custom logs. - The logs are transformed into ""message_obj"" format when they contain JSON payload. - Initially, there was an issue with logz not parsing JSON payload into individual records, so a pipeline was added to transform the object where required.",3,"[""Logs with 'message' format"", 'Spring default logs', 'Pipeline for transforming logs']","[""Logs with 'message_obj.msg' format"", 'Issue with logz parsing JSON payload']",,[] 2023-04-11T20:03:25.858639,2023-04-12T10:45:34.606609,rtaraniganty,"{'aganivada', 'askumar', 'rtaraniganty'}",3,"- The team is discussing options for building a product metrics dashboard. - One option is to use Retool to surface the data, although the charting capabilities may be limited. - Another option being explored is pushing the metrics to Snowflake using an event-based model. - Snowflake can handle JSON data well, so multiple events can potentially be stored in the same table using a type differentiator. - The team has created a sample test schema with three tables for the demo. - It is advised not to plan on too many joins, as Snowflake performs well with compression. - The team is considering creating a ""union"" schema to avoid creating a table per event type. - There may be cases where joins are necessary, such as mapping tenant and user IDs to names. - It is suggested to leave user and tenant data as IDs in Snowflake tables and see how things work in that mode.",15,"['options for building a product metrics dashboard', 'using Retool to surface the data', 'pushing the metrics to Snowflake using an event-based model', 'Snowflake can handle JSON data well', 'creating a sample test schema with three tables', 'Snowflake performs well with compression', ""creating a 'union' schema"", 'mapping tenant and user IDs to names', 'leaving user and tenant data as IDs in Snowflake tables']","['charting capabilities of Retool', 'planning on too many joins']",,[] 2023-05-15T22:34:56.814089,2023-05-15T22:54:50.950119,aganivada,{'aganivada'},1,- There was an issue with the report-generator lambda. - The issue has been resolved. - The person who raised the issue was not aware that the report generation private DNS changes were not merged to develop yet. - The report-generator was deployed from develop for testing CDKv2.,2,"['issue resolved', 'report-generator lambda']","['issue with report-generator lambda', 'report generation private DNS changes not merged']",,[] 2023-03-14T12:18:59.394609,2023-03-14T12:53:58.085939,mnirmal,"{'rtaraniganty', 'mnirmal'}",2,"- There is an issue with the 0.0.14 version of com.axm.commons. - The issue is causing a NullPointerException and InvocationTargetException. - The issue is occurring when running the service locally. - The test runs fine despite the errors. - Making a small change in test/resources/log4j2.xml fixes the issue temporarily. - There is a need to fix the code in observability to address the issue permanently. - The code in `public static TenantsLogConfig convert(String jsonString)` needs a null check at the top. - The issue is related to the `tenants` attribute, which is sometimes null instead of an empty string.",11,"['test runs fine', 'small change in log4j2.xml fixes the issue temporarily', 'null check needed in convert() method', 'issue related to tenants attribute']","['issue with version 0.0.14 of com.axm.commons', 'NullPointerException', 'InvocationTargetException', 'issue occurring when running service locally', 'code in observability needs to be fixed', 'tenants attribute sometimes null instead of empty string']",NEGATIVE,[] 2023-04-12T17:41:50.959149,2023-04-12T22:51:50.119929,rtaraniganty,"{'aganivada', 'rsrinivasan', 'akasim', 'svummidi', 'rtaraniganty', 'ppant'}",6,"- The team has a requirement to build product metrics dashboards using data from various sources such as Salesforce, Axm tables, audit-log ES store, and event data in Snowflake. - The team is considering different options for building the middleware/backend APIs to drive these dashboards. - Option #2, building an admin service in Python, seems to be favored because it is not as heavy as Java and allows for flexible and fast APIs. - There is a suggestion to consider using GraphQL if there is a need to consolidate data from multiple sources. - There is a discussion about whether to replicate existing endpoints in Python or have Retool connect directly to the backend via s2s. - The team discusses how to handle data from third-party sources like Salesforce, either by connecting Retool directly to Salesforce or adding code to the middleware service to talk to Salesforce. - There is a mention of using JDBC to connect directly to data sources, but it is noted that this may not be sufficient for complex data requirements and may not be ideal from a security perspective. - The team acknowledges the need for some middleware to handle data stitching across multiple services. - There is a suggestion to use Java 21 with light threads instead of Python, but it is not clear if this suggestion is serious or not.",15,"['Building product metrics dashboards', 'Flexible and fast APIs', 'Consideration of GraphQL', 'Handling data from third-party sources', 'Middleware for data stitching']","['Replicating existing endpoints in Python', 'Using JDBC for complex data requirements', 'Uncertainty about using Java 21 with light threads']",UNKNOWN,[] 2023-08-30T23:41:44.836679,2023-08-31T10:03:51.709219,akasim,"{'akasim', 'pjha', 'rtaraniganty'}",3,- The Gitlab pipeline is stuck in a pending state. - The pipeline was triggered 15 minutes ago and is still pending. - The team needs to rollout the HF (Hotfix) today. - Akasim is seeking help to resolve the issue. - Pjha is investigating the root cause of the problem. - Pjha confirms that the pipeline has started but is still looking into the root cause. - Rtaraniganty asks Pjha to document the fix under HowTO.,7,['Hotfix rollout'],"['Gitlab pipeline stuck', 'Pending state']",NEGATIVE,[] 2023-05-31T03:02:30.914489,2023-06-06T03:29:00.266179,ppant,"{'aganivada', 'rtaraniganty', 'ppant'}",3,- ppant has updated the document for backend changes for multi instances and is seeking feedback. - rtaraniganty suggests calling a review meeting with the overall team to walk them through the changes. - rtaraniganty raises concerns about the rewiring of clients and the potential need for widespread changes. - ppant confirms that core-data only creates a common APIClient and there should not be a major change required. - aganivada suggests scheduling a review meeting for Friday morning IST or Monday evening IST. - ppant agrees to schedule the review meeting early next week due to the SSO prep call.,15,"['document update', 'feedback', 'review meeting', 'common APIClient']","['rewiring of clients', 'widespread changes', 'SSO prep call']",,[] 2023-02-27T19:55:36.414679,2023-02-28T07:56:21.837419,aganivada,"{'aganivada', 'pjha', 'rtaraniganty', 'snangia'}",4,"- The value of `${env_role_arn}` in the `run-robot-cicd.sh` script of system tests is determined by the `dyn_var` function. - The `dyn_var` function checks for a variable named `AGENT_ROLE_ARN_$AXM_ENV` and if it exists, assigns its value to `env_role_arn`. - The value of `AXM_ENV` is converted to uppercase and stored in `AXM_ENV_LOWER`. - The actual values for `AGENT_ROLE_ARN` come from GitLab variables. - The role `arn:aws:iam::433798924509:role/Gitlab-Agent-Role` only works from the system test repo and not any other repo. - There is a repo-level filtering added on the AWS side for the roles. - There may be an additional environment variable being passed to the system-test job that affects the role. - The documentation on configuring a conditional role with OIDC claims in GitLab's cloud services may provide further insights. - The discussion participants are aware of the issue and are working on resolving it.",12,"['dyn_var function', 'AXM_ENV', 'AGENT_ROLE_ARN', 'GitLab variables', 'repo-level filtering', 'additional environment variable', 'conditional role with OIDC claims', 'discussion participants']",[],POSITIVE,[] 2023-07-28T14:53:33.247069,2023-07-30T21:58:06.023039,rtaraniganty,"{'akasim', 'askumar', 'aganivada', 'rtaraniganty'}",4,"- Akasim inadvertently reverted the changes made by multiple people in the cdk-artifacts. - Svummidi found that the askai changes were gone and discovered that a commit undid a bunch of files. - Rtaraniganty undid Akasim's undo by using ""git revert"" and resolved a conflict in the core/cdk.json file. - The team needs to ensure that the merges made between July 26 and now are available or if they need to replay each merge again. - The challenge seems to be with branches created after July 26 and merged back. - Aganivada and Askumar confirmed that their changes are present in the develop branch. - Akasim apologized for the mistake and thanked the team for fixing it. - Rtaraniganty mentioned that Satya lost more time than anyone else.",5,"['changes fixed', 'conflict resolved', 'changes present']","['reverted changes', 'undid files', 'lost time']",,[] 2023-08-23T16:43:45.323139,2023-08-23T20:16:26.103219,rtaraniganty,"{'askumar', 'rsrinivasan', 'rtaraniganty'}",3,- rtaraniganty is using VS Code for Python code formatting. - They have installed Black and Flake8 extensions for code formatting. - They have specific settings in VS Code for Python code formatting. - rsrinivasan suggests setting up pre-commit hooks in PyCharm for automatic formatting before committing. - askumar expresses gratitude to rtaraniganty.,4,"['VS Code', 'Python code formatting', 'Black', 'Flake8', 'specific settings', 'pre-commit hooks', 'PyCharm', 'automatic formatting', 'gratitude']",[],,[] 2023-08-01T09:03:47.239689,2023-08-03T23:44:42.327989,pjha,"{'pjha', 'rtaraniganty', 'askumar'}",3,- There is a need for reviewing the process of temporary bastion access. - There is no time-based clean up for the uploaded keys. - Keys need to be manually removed from the S3 bucket. - There is a suggestion to implement a lambda function for automatic clean up of keys. - The suggestion is considered doable and the person will work on it.,6,"['reviewing process of temporary bastion access', 'implementing lambda function for automatic clean up of keys', 'suggestion considered doable']","['no time-based clean up for uploaded keys', 'keys need to be manually removed from S3 bucket']",,[] 2023-01-10T03:02:09.785699,2023-01-10T05:55:09.828369,araman,"{'sfarooqh', 'rsrinivasan', 'bganganna', 'araman', 'askumar'}",5,"- There is an issue with rule parsing in a test tenant in the production environment. - The issue may be related to a recent update in the production environment for reports. - The lambda timeout issue is unrelated to the rule parsing error. - A trace_id has been shared for further investigation. - A tracker has been created for this issue. - There was no manual editing of user metadata for this user in auth0. - The bug in reporting is related to permissions in the wrong format. - Permissions in the wrong format may have been created during the process of creating permissions in reporting to fetch a token from the vault-service. - The feature itself does not fail and the report generation is successful, so the logs do not show any errors. - The reporting service/lambda functions are written in Node.js.",15,"['rule parsing', 'recent update', 'trace_id', 'tracker', 'report generation', 'Node.js']","['lambda timeout', 'bug in reporting', 'permissions in wrong format', 'user metadata editing']",NEGATIVE,"['DISC-2137', 'DISC-2137', 'DISC-2137']" 2023-04-17T21:00:29.403909,2023-04-24T07:17:12.935869,aganivada,"{'aganivada', 'svummidi', 'akalyta'}",3,"- aganivada is checking if they have access to a notification from OpenAI. - aganivada is asking rsrinivasan and akalyta for an estimate on how much quota they should request to increase. - aganivada has submitted a request to increase the quota to $200 and cc'd rsrinivasan. - akalyta thanks aganivada for submitting the request. - aganivada sets a hard limit of $150 to be on the safe side, with a 50 buffer in case they exceed the threshold. - svummidi thanks aganivada for finding the credentials in 1Password using the operations account.",9,"['notification access', 'quota increase request', 'submitting request', 'finding credentials']","['unknown notification access', 'unclear quota increase estimate', 'exceeding threshold']",,[] 2023-04-03T03:15:47.597929,2023-04-18T02:46:32.137459,rsrinivasan,"{'aganivada', 'rsrinivasan'}",2,"- There is a discussion about modifying an API client called ""moustache"" in the OpenAPI Generator project. - The discussion is happening on a pull request with the number 6036. - The pull request contains some code changes that can be viewed on the provided GitHub link. - The participants in the discussion are rsrinivasan, aganivada, pjha, rvaidya, and ppant.",2,"['API client modification', 'OpenAPI Generator project']",['None'],UNKNOWN,[] 2023-01-17T09:04:53.086339,2023-01-17T20:20:51.321179,aganivada,"{'aganivada', 'askumar'}",2,- There was a change made in the int branch of the platform-libs-commons repository. - The change caused an exception related to the put operation not being supported on readonlyhttpheaders. - The exception was observed during the deployment of tenancy with the latest commons. - The stack trace indicates that the error occurred in the ReadOnlyHttpHeaders class and the WebClientConfig class. - The change in WebClientConfig was made to address the issue. - The change was not verified in Int and the runtime error was not caught during local testing. - Aganivada asked Askumar to review the changes made in WebClientConfig. - Askumar confirmed that he reviewed the changes for client config. - Aganivada acknowledged that the change was not verified in Int.,5,"['change made in int branch', 'change in WebClientConfig']","['exception', 'error during deployment', 'error in ReadOnlyHttpHeaders', 'error in WebClientConfig', 'change not verified in Int']",,[] 2023-08-27T21:51:04.002789,2023-08-27T21:51:36.927019,aganivada,"{'aganivada', 'askumar'}",2,- Aganivada is requesting Askumar not to close the tenancy HF (Hotfix) yet. - Aganivada has a couple of fixes that need to be implemented. - Aganivada suggests deploying tenancy 20.1 to the stage for testing purposes. - Askumar agrees to Aganivada's request.,2,"['request to not close tenancy HF', 'fixes to be implemented', 'deploy tenancy 20.1 to stage', 'Askumar agrees']",[],,[] 2023-08-21T01:14:23.262809,2023-08-22T14:32:55.569299,mnirmal,"{'askumar', 'rtaraniganty', 'mnirmal'}",3,"- The build is failing in PMS because of a missing tenant-deletion 0.0.1 jar. - The plugin code has been commented out temporarily to unblock the build. - Askumar suggests trying to build with the latest release version 0.0.2-SNAPSHOT, but it doesn't work. - Rtaraniganty mentions that the branch was pulled just before they updated develop and should have notified everyone. - It is suggested to add a pluginRepositories section in the pom file to resolve the issue. - Mnirmal asks why this change is not in develop yet and mentions having to update it again in another feature branch. - Rtaraniganty checks if the change is in develop.",9,"['build failure', 'temporary solution', 'pluginRepositories section', 'checking develop branch']","['missing tenant-deletion jar', 'latest release version not working', 'not in develop branch']",,[] 2023-04-04T10:48:27.496229,2023-04-04T11:13:17.123929,aganivada,"{'aganivada', 'bganganna'}",2,"- aganivada wants to deploy tenancy HF on stage to verify the invitation issue. - bganganna agrees to this request. - aganivada thanks Bhavana (bganganna) and mentions that they will deploy the fix to stage tomorrow, run some tests, and inform bganganna about the results.",3,[],[],,[] 2023-08-08T08:04:30.318739,2023-08-08T08:36:01.255479,gdahiya,"{'aganivada', 'gdahiya', 'rtaraniganty', 'askumar'}",4,- Tenancy is down in the INT environment. - Port 8080 was unhealthy and the service got redeployed. - The health check failed due to Auth0 taking too long to respond. - Auth0 recovered on its own without any action from the team. - There were socket timeouts while reaching Auth0 for health checks. - A support ticket has been raised with Auth0 to investigate the issue.,13,[],"['Tenancy is down', 'Port 8080 was unhealthy', 'Health check failed', 'Socket timeouts', 'Support ticket raised']",,[] 2023-07-27T02:47:42.108649,2023-07-27T05:24:34.659349,aganivada,"{'aganivada', 'rvaidya'}",2,- There have been frequent network failures due to rains in the last few days. - The network issues have caused delays in response. - The network issues are expected to be resolved by tomorrow. - Slack also experienced a service outage. - Some people were happy about the outage as it gave them a break from work.,3,['break from work'],"['network failures', 'delays in response', 'service outage']",MIXED,[] 2023-03-02T09:18:17.926499,2023-03-02T19:37:23.148239,sjain,"{'rtaraniganty', 'sjain'}",2,- sjain encountered a NullPointerException (NPE) while updating pvt during an integration test in the INT environment. - sjain provided reference logs from Logz.io to help identify the issue. - rtaraniganty offered to deploy the current develop version of core to see if it resolves the issue. - sjain confirmed that the issue was resolved after rtaraniganty's deployment.,5,"['NullPointerException', 'integration test', 'Logz.io', 'issue resolved']",['N/A'],POSITIVE,[] 2023-08-08T08:42:08.712319,2023-08-08T08:43:50.217539,rtaraniganty,{'rtaraniganty'},1,"- A new script called ""scale-gl-runner.sh"" was added to the release-tools repository. - There was a missing end-quote on line 30 of the script, which was fixed in a subsequent commit. - The script is used to bring up and down an instance called ""axmint"" forcefully. - The suggestion was made to add a feature to the script that would poll for the state of the instance and start it if it is ""stopped"" after a minute. - The goal is to avoid the need to perform any actions via the console.",2,"['new script added', 'missing end-quote fixed', 'script functionality', 'feature suggestion']",['missing end-quote'],,[] 2023-05-09T00:43:55.853069,2023-05-15T02:56:07.209469,nsrivastava,"{'aganivada', 'nsrivastava', 'rtaraniganty', 'hchintamreddy'}",4,"- The Jira integration in GitLab has broken links, with the Jira ID in the MR description leading to a 404 page. - It is suggested to update the MR template to include the Jira ID with the correct URL. - Mentioning the Jira ID in the commit message in Jira leads to an active link to GitLab. - The default Web URL in GitLab Jira integration needs to be updated from to . - There seems to be a bug in GitLab where it is not constructing the correct URL for Jira issues. - Other aspects of GitLab-Jira integration, such as recording commits and branches in Jira, work as expected. - A case has been opened with GitLab to address the issue with constructing the correct URL. - The issue seems to be resolved, and the Jira-GitLab integration is now working. - The Jira link is saved at the time the GitLab message/text is composed and is not retroactive. - Having a Jira ID in the MR title now has more value. - Wrapping the Jira ID in square braces helps GitLab recognize it and construct the correct URL. - The MR template should be updated to reflect the changes in the Jira-GitLab integration.",20,"['Jira integration in GitLab', 'Updating MR template', 'Active link to GitLab', 'Recording commits and branches in Jira', 'Jira-GitLab integration working', 'Jira ID in MR title']","['Broken links in Jira integration', 'Bug in GitLab URL construction']",POSITIVE,['COLL-1430'] 2023-08-08T20:32:20.939979,2023-08-08T20:50:22.372469,aganivada,"{'aganivada', 'pjha'}",2,"- Aganivada wants to investigate why a job terminated in the runner. - Pjha agrees to check the issue. - Aganivada mentions that re-running the job completed running all tests. - There seems to be an issue with the generation of an auth0 token, specifically with the ""Generate Auth0 Token For User"" keyword.",4,"['investigate job termination', 're-running job', 'completed running tests']","['issue with auth0 token generation', 'Generate Auth0 Token For User keyword']",,[] 2023-03-26T20:01:21.225479,2023-03-28T00:53:08.814399,rtaraniganty,"{'mli', 'rvaidya', 'pkarthikeyan', 'mnirmal', 'araman', 'rtaraniganty'}",6,"- The team is discussing how to include a new person in a pulse after it has been sent out. - One suggestion is to edit the config, but if that doesn't work, the new user can be added with a different persona and a pulse can be sent for that persona. - There is a new API that was developed to include the latest users added. - The API was executed and they are waiting to see if the notification goes through. - One team member is experiencing issues with the AXM Stage Slack Bot not loading and not receiving the latest pulse. - They are trying to figure out what happened, and one possibility is the time zone. - The team is looking at logs to identify the issue with the service user info entry. - They plan to run queries on the stage database to further investigate the issue.",14,"['Including new person in pulse', 'New API for adding users', 'Executing API for notification', 'Investigating AXM Stage Slack Bot', 'Looking at logs', 'Running queries on stage database']","['Issues with AXM Stage Slack Bot', 'Problem with service user info entry']",UNKNOWN,[] 2023-04-20T02:30:20.298699,2023-05-01T22:12:25.258309,ppant,"{'svummidi', 'rsrinivasan', 'rtaraniganty', 'ppant'}",4,"- The discussion is about the backend OpenAPI generated ApiClient changes and related MRs for accommodating multi-instance deployments. - There is a file called ""ApiClient.mustache"" that is being discussed. - It is mentioned that the file is generated from the openapi tools project. - There is a question about whether the template can be customized, and it is suggested to keep the original file in the repository for future updates. - It is clarified that the template for ApiClient is not generated from Swagger, but it uses it automatically. - The base template for ApiClient is taken from the webclient lib and modified. - It is suggested that the modified file should reside in core-libs-data-commons and its dependency should be added in the specification.xml of the projects to use it.",7,"['backend OpenAPI generated ApiClient changes', 'accommodating multi-instance deployments', 'file ApiClient.mustache', 'keeping original file in repository', 'base template for ApiClient', 'modified file in core-libs-data-commons', 'adding dependency in specification.xml']",[],,[] 2023-04-11T23:15:24.499599,2023-05-17T20:34:07.541339,ppant,"{'aganivada', 'rvaidya', 'anair', 'rtaraniganty', 'ppant'}",5,"- The team is discussing controlling invitation emails via feature flags. - Currently, Auth0 does not support multiple templates for invitation emails. - The team plans to introduce feature flags or a parameter in the API to use custom templates for invitation emails. - They are waiting for the template content to be shared so they can send custom notifications instead of using the default invitation template from Auth0. - They are discussing the wording and content of the invitation templates. - They are considering adding an enum field called ""invitationType"" in the API request body to indicate which templates to use. - The ""Complete SSO Migration"" button in the SSO migration email template should take the user to the Axiamatic login page. - The platform team is working on a change to stop Auth0 from sending invitation emails and instead have Tenancy send emails through their notification service. - They will add enums of various templates in the core-data-commons link and may need help from the apps team to add an additional parameter in the tenancy invitation API. - They will create Jira tickets for the tasks related to invitation flows. - The SSO migration script triggering invitation to the user is not required for the apps flow, as it is a backend-only flow for SSO migration. - The backend work for the email templates is completed, and they are discussing integrating it into the apps in version 0.9.12.",24,"['controlling invitation emails via feature flags', 'introducing feature flags or a parameter in the API for custom templates', 'waiting for template content to be shared', 'discussing wording and content of invitation templates', ""adding enum field 'invitationType' in API request body"", 'Complete SSO Migration button in SSO migration email template', 'platform team working on change to stop Auth0 from sending invitation emails', 'adding enums of various templates in core-data-commons link', 'creating Jira tickets for tasks related to invitation flows', 'backend work for email templates completed', 'integrating email templates into apps in version 0.9.12']","['Auth0 does not support multiple templates for invitation emails', 'may need help from apps team to add additional parameter in tenancy invitation API', 'SSO migration script triggering invitation not required for apps flow']",UNKNOWN,['DISC-2997'] 2023-05-22T22:01:24.842979,2023-05-22T22:02:39.471729,pjha,"{'aganivada', 'pjha'}",2,- pjha is requesting aganivada to review the multi-instance-deployment phase-1 changes from the infra side. - The changes have been enabled for the experiment-service and tenancy-service. - pjha provides a link to the merge request for aganivada to review. - aganivada suggests that pjha should post the reviews in the merge-requests channel.,2,"['multi-instance-deployment', 'infra side', 'experiment-service', 'tenancy-service', 'merge request']",['merge-requests channel'],,[] 2023-05-24T04:13:48.611679,2023-05-24T06:54:42.318159,hchintamreddy,"{'aganivada', 'sjain', 'hchintamreddy'}",3,"- There is an issue with a stuck deployment in INT for pulse manager. - The error message indicates a problem with pulling the image manifest. - hchintamreddy asks sjain if they have pulled the latest develop branch for CDK deployment before deploying. - hchintamreddy kills the current deployment and restarts it. - hchintamreddy suggests that the issue may be due to the image name having ""pulse-manager"" twice. - The deployment is eventually successful. - aganivada apologizes for not seeing the message earlier, but hchintamreddy assures them that the issue has been fixed.",11,"['stuck deployment', 'successful deployment']","['problem with pulling image manifest', 'issue with image name']",,[] 2023-01-16T04:38:02.157089,2023-01-24T03:14:22.695709,ppant,"{'aganivada', 'psomasamudram563', 'rtaraniganty', 'ppant'}",4,"- There are two calls made to core data for the `/api/v1/link/vendor/{tenantId}` API: one to get customers for each invite and another to get all customers for the tenant. - Pagination has been implemented for the `/api/v1/vendor/{tenantId}/customers` API using a list and its indexes to maintain page number and size. - It is difficult to sync the page number and size with the second call when going back and forth on the pages. - One suggestion is to get all the data in the backend but only present the slice that aligns with the pagination provided in the API. - Splitting the UI into two components, invitations and active customers, could make it easier to handle the pagination. - The V1 model for looking for customers from props is deprecated, so there is no need to worry about that. - The bigger problem is the consolidation of responses in the UI, which will be discussed with rvaidya. - There is a suggestion to add an API to core-data to narrow down the search instead of returning everything. - The idea is to get all the tenants of a page and craft an RSQL query predicate to get invitation contexts for specific customers only. - There are two sections for showing customers: those who were invited but have not accepted the invitation and those who have accepted the invitation. - Pagination became tricky because the sources were two different tables. - One suggestion is to filter all the invitations on a given tenant from tenancy itself and update the records with organization info wherever applicable. - The new API `/api/v1/link/vendor/paged/{tenantId}` has been deployed to int.",8,"['Pagination implementation', 'Splitting UI into components', 'Adding API to narrow down search', 'Updating records with organization info']","['Difficulties with syncing page number and size', 'Consolidation of responses in UI', 'Tricky pagination due to different sources']",UNKNOWN,[] 2023-08-18T17:37:41.473719,2023-08-21T06:01:32.181729,svummidi,"{'aganivada', 'svummidi', 'gdahiya', 'rtaraniganty'}",4,"- The recommended practice for production release with hot-fixes is to deploy the hot-fix build in int and stage, and then merge the branch before deployment to ensure what was tested is the same as what is deployed. - Deploying from the main branch is the safest approach, even though it may require an additional round of validation. - If two hot-fixes are running in parallel, deploying from one hot-fix branch could result in losing the changes in the other. - The team currently tests from the hot-fix branch and merges to main and develop before deploying to production. - The team creates a separate branch for each task/fix from the hot-fix branch and merges it back to the hot-fix branch. - The team manually updates the pom file versions in the sub-branches if they want to deploy and test specific changes on INT or STAGE. Otherwise, they keep the pom version the same as in the hot-fix branch.",8,"['recommended practice for production release with hot-fixes', 'deploying from the main branch', 'testing from the hot-fix branch', 'creating separate branches for each task/fix', 'manually updating pom file versions']",['losing changes in parallel hot-fixes'],,[] 2023-03-17T09:31:38.108989,2023-03-19T23:04:06.962849,ppant,"{'aganivada', 'askumar', 'rtaraniganty', 'ppant'}",4,"- The team is discussing the integration of Lambda with VPC. - One of the requirements is to update the lambda code to make API calls through a private domain instead of using API gateway. - The platform team will update common libraries and shared lambdas, while individual teams will be responsible for updating their own lambdas. - Some lambdas are not using common libraries, so additional work may be required for those. - The team needs help with updating three lambdas, while the platform team will handle the rest. - Getting all the lambdas to start using the common library is seen as a desirable side-effect. - The Apps Workflow Lambda will also be included in the updates.",11,"['integration of Lambda with VPC', 'updating lambda code', 'updating common libraries', 'updating shared lambdas', 'updating individual lambdas', 'getting lambdas to use common library', 'updating Apps Workflow Lambda']",[],,[] 2023-07-04T01:54:34.469649,2023-07-04T02:04:04.144789,askumar,"{'aganivada', 'askumar'}",2,- The maximum ACU for the INT Aurora cluster has been updated to 5 ACUs. - The decision to update the ACU was made due to high CPU utilization. - There was a plan to configure a proxy to better monitor the cluster before making the update. - The person who initially mentioned the update realized their mistake and acknowledged it.,3,"['ACU update', 'Proxy configuration', 'Acknowledgement']","['High CPU utilization', 'Mistake']",POSITIVE,[] 2023-03-03T03:12:39.789749,2023-03-06T22:37:30.131839,pjha,"{'pjha', 'rtaraniganty'}",2,"- The team is planning to make modifications to the service call endpoints. - They are looking for any lambda functions that communicate with their services. - The most comprehensive lambda function in terms of invoking various services is the tenant deletion lambda. - The lambda function uses the `app_config.get_api_domain_url()` method to get the API gateway URL. - By replacing `app_config.get_api_domain_url()` with something service-specific, they can cover almost all the services in one shot. - Other lambda functions like platform-lambda-triggerpulse and platform-lambda-useraction also make calls to the services via the API gateway using the same `app_config` and `API_GW_URL`. - The configuration for `event['service']` is defined in coordination_building_blocks/step_function_builder_stack.py and in the state machine definition.",4,"['modifications to service call endpoints', 'lambda functions communicating with services', 'comprehensive tenant deletion lambda function', 'usage of app_config.get_api_domain_url() method', 'replacing app_config.get_api_domain_url()', 'lambda functions platform-lambda-triggerpulse and platform-lambda-useraction', ""configuration for event['service']""]",[],,[] 2023-03-07T21:40:53.890079,2023-03-08T20:16:40.588309,ppant,"{'aganivada', 'rsrinivasan', 'rtaraniganty', 'ppant'}",4,"- The team is looking for a tool that provides business intelligence (BI) with visualization and an interface for executing SQL on selected databases and tables. - Apache Superset and Metabase are suggested as potential options for this tool. - The team prefers tools that they don't have to run themselves. - Imply Druid is mentioned as a tool with a good dashboard builder, along with Microsoft Power BI. - There is a suggestion to shift to a metric-based approach using events, where microservices post events with context to a bus like Kinesis or SQS, which can then be sent to Power BI, Druid, or Google GA for building dashboards. - It is suggested to split the data into static content from databases like Drona HQ or Retool, and stream event data to an analytical system like Snowflake for attaching time series and writing simple SQL queries for visualization. - The current model of scraping Prometheus has limitations, such as the inability to change the time it was scraped and the overhead of mapping IDs to names on services.",6,"['Business Intelligence (BI)', 'Visualization', 'SQL execution', 'Apache Superset', 'Metabase', 'Imply Druid', 'Microsoft Power BI', 'Metric-based approach', 'Event-based architecture', 'Kinesis', 'SQS', 'Google GA', 'Static content', 'Drona HQ', 'Retool', 'Analytical system', 'Snowflake', 'Time series', 'Scraping Prometheus']","['Running tools', 'Limitations of scraping Prometheus']",UNKNOWN,[] 2023-06-27T06:03:55.398049,2023-07-13T23:28:29.708619,aganivada,"{'aganivada', 'rtaraniganty', 'rvaidya'}",3,"- The team is discussing two approaches for fetching product admins based on solution id/id's for the CISO dashboard. - Approach 1 involves using the auth0 query API to search for product owners based on user metadata. - Approach 2 involves making core changes to maintain a DB table with web-app access users, their roles, and linked product instances they own. - The team agrees to start with Approach 1 for the CISO dashboard and later transition to Approach 2. - Approach 2 is seen as a longer-term option and may require more time for research and implementation. - The API specs should not change regardless of the approach chosen. - An epic will be created for Approach 2, but it will not be part of the CISO dashboard. - Once Approach 2 is in a decent shape and migration jobs are completed, a feature flag (FF) can be used to switch to Approach 2. - The API spec should remain consistent throughout the transition.",5,"['Approach 1', 'API specs', 'Consistent API spec']","['Approach 2', 'Longer-term option', 'More time for research and implementation', 'Epic not part of CISO dashboard', 'Migration jobs', 'Feature flag (FF)']",NEUTRAL,[] 2023-05-09T08:09:53.173979,2023-05-09T08:15:29.457549,aganivada,"{'aganivada', 'pjha'}",2,"- The user ""aganivada"" is deleting tenancy multi-instances deployed in int for now. - They plan to revisit the multi-instance changes during testing. - ""aganivada"" asks ""pjha"" if they want to keep ""*exp-stack-int-plat-1"" for validation. - ""pjha"" confirms that they want to reduce the desired instances to 0.",3,"['deleting tenancy multi-instances', 'revisit multi-instance changes', 'keep *exp-stack-int-plat-1 for validation', 'reduce desired instances to 0']",[],,[] 2023-03-28T20:12:25.503019,2023-04-02T23:18:22.362409,rtaraniganty,"{'aganivada', 'rsrinivasan', 'pmangalapuri', 'gshenoy', 'rtaraniganty'}",5,- The node count in the prod ES cluster has fallen from 7 to 4. - The cluster *oses-plat-evtmgmt-production* is no longer being used for searching and can be turned off. - The team plans to write data to S3 in batches and clean up periodically instead of using the *oses-plat-evtmgmt-production* cluster. - The downsizing of the *oses-plat-evtmgmt-production* cluster has been done in production. - A case has been opened to investigate the discrepancy in the node count. - There is a discussion about archiving events in S3 instead of using elastic search with S3 backup. - The team is considering setting a TTL for the events in S3 or finding a way to delete them. - CDK can be used to direct events to S3 and set up expiration policies. - The cost and duration of the TTL for events in S3 will be monitored. - The team has CDK code for Kinesis to S3 and can leverage it for this purpose.,25,"['downsizing of oses-plat-evtmgmt-production cluster', 'writing data to S3 in batches', 'archiving events in S3', 'setting TTL for events in S3', 'using CDK for directing events to S3']","['falling node count in prod ES cluster', 'cluster oses-plat-evtmgmt-production no longer used for searching', 'discrepancy in node count', 'discussion about archiving events in S3 instead of using elastic search', 'finding a way to delete events in S3', 'monitoring cost and duration of TTL for events in S3']",,"['DISC-2728', 'DISC-2728']" 2023-02-26T21:21:36.106199,2023-03-08T08:09:18.704979,snangia,"{'aganivada', 'pjha', 'askumar', 'rtaraniganty', 'snangia'}",5,"- The team is discussing whether there is a CDK support to update permission policies for the Gitlab-Agent-Role or if it needs to be done manually from the console. - It is mentioned that there is currently no way to update the permission policies using CDK. - The team discusses the need for access to an S3 bucket similar to the notification-data bucket. - It is mentioned that the S3 bucket already exists in all environments and the team just needs to push the latest version of the app to the existing buckets using AWS CLI. - The team requests aganivada to add S3 read-write permission for the teams-bot bucket in the INT environment. - aganivada updates the INT policy to include Get, Put, and Delete permissions for the teams-bot bucket. - It is mentioned that the S3 bucket write access also needs to be added in the stage and prod environments. - pjha is asked to make the necessary changes for S3 bucket write access in the stage and prod environments. - pjha adds the required project_path to the trust policy for the staging environment. - pjha mentions that the bucket is not present in the prod environment. - aganivada asks pjha to check the static content bucket in the prod environment for the S3 bucket. - pjha confirms that the bucket is not present in the prod environment. - pjha adds S3 access permissions to both the stage and prod environments. - snangia thanks pjha for the changes.",38,"['CDK support', 'S3 bucket access', 'Permission updates']",['Manual updates'],,[] 2023-01-04T10:20:01.708739,2023-01-05T20:08:28.312869,rtaraniganty,"{'aganivada', 'psomasamudram563', 'rtaraniganty', 'ppant'}",4,- The discussion is about connecting to a VPN from GitLab programmatically. - The VPN connection is required for system test jobs that communicate with service endpoints using s2s token. - The article mentioned in the discussion talks about using OpenVPN. - The user is planning to have a call with Perimeter 81 support to gather more information about VPN setup. - An alternative approach suggested is to add temporary credentials obtained from AWS STS Assume Role as a header for all requests made by tests. - The authorizer can validate the header by calling sts.amazon.com to verify the credentials. - The Perimeter 81 support mentioned IPSEC tunnels and OpenVPN tunnels as options for VPN setup. - They offer different pricing plans and a 30-day money-back guarantee. - The user suggests building an authorizer that logs the IP address of the caller and attaches it to the Gateway in INT for monitoring purposes. - The user mentions using O365 for authentication instead of AWS due to the lack of a single source of truth in AWS.,15,"['VPN connection', 'OpenVPN', 'Perimeter 81 support', 'IPSEC tunnels', 'OpenVPN tunnels', 'pricing plans', 'money-back guarantee', 'authorizer', 'O365 authentication']",['lack of single source of truth in AWS'],UNKNOWN,[] 2023-03-21T05:04:45.396849,2023-03-21T11:07:21.706529,ppant,"{'rtaraniganty', 'ppant'}",2,"- The discussion is about a specific issue with the `AxmFilter` class. - The issue is that when the filter is set to DEBUG level, some TRACE logs are also being seen. - The code of the `filter()` method is being reviewed, and it is noticed that in some cases, the method is using `level.isLessSpecificThan()` instead of `level.isMoreSpecificThan()`. - The person named rtaraniganty acknowledges the issue and states that it should be fixed.",2,"['AxmFilter issue', 'filter() method review', ""rtaraniganty's acknowledgement""]",['TRACE logs in DEBUG level'],,[]