Thread Timestamp,Originator,Unique Users,Summary,Message Count,Positive Topics,Negative Topics,Overall Sentiment,Links
1693936814.718349,U02SCRTM2M7,"{'U03RSS0S76Y', 'U02GC8SE18V', 'U02SCRTM2M7', 'U0336QZAF98', 'U040RCBPBEC'}","The team discusses an issue where there are pulse instances in the score provider that do not exist in the pulse manager, causing the summary/sidebar to be empty. They consider excluding these old pulses using the pulse manager API. Pulse deletion is no longer supported from the dashboard.",12,"['Issue with pulse instances', 'Excluding old pulses using pulse manager API']",['Pulse deletion not supported from the dashboard'],NEGATIVE,[]
1690297147.829179,U0336QZAF98,"{'U0431DZTPJM', 'U0336QZAF98'}",The discussion thread is about errors in the estimation service in INT. The user U0336QZAF98 mentions that there are errors related to the pulse_survey_expire event and asks for it to be removed from the estimation-axm-sqs code. User U0431DZTPJM agrees to do it and provides a link to the commit and merge request.,4,[],"['errors in the estimation service', 'pulse_survey_expire event']",,[]
1689930443.185759,U02SCRTM2M7,"{'U02HQ78V9A5', 'U02SCRTM2M7', 'U040RCBPBEC', 'U02GC8SE18V'}","The team discusses an issue with querying the score-provider API. They share the request and response details, and mention creating a JIRA ticket for tracking the issue. They also mention deploying the latest build and checking the commit log for bug fixes. The issue is assigned to U040RCBPBEC for investigation and resolution.",9,"['querying score-provider API', 'sharing request and response details', 'creating JIRA ticket', 'deploying latest build', 'checking commit log', 'assigning issue to U040RCBPBEC']",['issue with querying score-provider API'],,[]
1689785132.353669,U055SRY74R0,"{'U055SRY74R0', 'U040RCBPBEC'}",The discussion is about running Llama2 locally on a Mac. U040RCBPBEC suggests using quantization to reduce the number of features in the embeddings for smaller hardware. They also mention that the quality of an earlier version of Llama was not great and it was slow. U055SRY74R0 acknowledges this and mentions that Microsoft Azure offers Llama in their model library for a price of 9K per month with 500GB RAM. No references or links are mentioned.,3,"['running Llama2 locally on a Mac', 'using quantization to reduce features']","['poor quality of earlier version of Llama', 'slow performance of earlier version of Llama']",NEUTRAL,[]
1684172726.358769,U02Q02G09QX,"{'U02Q02G09QX', 'U03ML44ADTQ', 'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread is about generating pulse questions for templates using AI. The team is seeking feedback on the questions generated by OpenAI. They discuss the number and type of questions, with suggestions for positive, negative, and investigative questions. They also mention the possibility of controlling the question generation parameters from the backend. There is a suggestion to explore product-specific questions by personas. Feedback is given on focusing on factors users have disliked about Splunk, such as pricing, data ingestion, and dashboards. The team discusses the impact of the questions on Splunk users and suggests specific questions related to query language, graphing and charting capabilities, alerting capabilities, and query functionality on logs.",9,"['feedback on questions generated by OpenAI', 'controlling question generation parameters', 'exploring product-specific questions by personas', 'suggestions for specific questions related to query language, graphing and charting capabilities, alerting capabilities, and query functionality on logs']","['focusing on factors users have disliked about Splunk', 'pricing', 'data ingestion', 'dashboards']",MIXED,[]
1691606946.827549,U0336QZAF98,"{'U0336QZAF98', 'U040RCBPBEC'}",The discussion thread is about adding a change to the hotfix branch. The change involves using the JsonJacksonCodec with Redisson to convert key and values to strings. It is suggested to define the ObjectMapper as a bean and pass it as a parameter to the redissonCacheManager. Existing caches need to be cleared and a merge request will be created tomorrow.,7,"['adding change to hotfix branch', 'using JsonJacksonCodec with Redisson', 'defining ObjectMapper as a bean', 'passing ObjectMapper as parameter to redissonCacheManager', 'clearing existing caches', 'creating merge request']",[],,[]
1690782580.793189,U03KLHDKL1H,"{'U02Q02G09QX', 'U02BV2DGUKC', 'U03KLHDKL1H', 'U03DHUAJVMK', 'U040RCBPBEC'}","The discussion thread revolves around a cleanup job that is failing due to an incorrect variable name. The Imply host is not active, causing the job to fail. The purpose of the cleanup job is to remove test segments created by end-to-end system tests. The job is considered safe as long as there is no real data within the specified time intervals. There is a suggestion to include scoring-related tests in daily sanity runs for better integration testing. No specific JIRA ticket or links are mentioned in the discussion.",15,"['cleanup job', 'safe job', 'scoring-related tests', 'daily sanity runs', 'integration testing']","['incorrect variable name', 'Imply host not active', 'job failure', 'no real data', 'missing JIRA ticket or links']",NEGATIVE,[]
1687248049.813769,U042KRZPXHT,"{'U03KLHDKL1H', 'U0336QZAF98', 'U042KRZPXHT'}",The discussion thread is about the estimation service being down in the INT environment. There was an update going on which caused deployments to fail due to a required dependency. The logs showed liquibase and bean creation errors. The team resolved the issue by deploying to a previous instance once the update was completed. The deployment was successful and the service is now up in INT.,7,"['successful deployment', 'service up in INT']","['estimation service down', 'deployments fail', 'required dependency error', 'liquibase error', 'bean creation error']",,[]
1687146626.762489,U03NZ7Z52S2,"{'U0336QZAF98', 'U03NZ7Z52S2', 'U040RCBPBEC', 'U02GC8SE18V'}","The dashboard was not loading due to a failure in the summary API, which was causing an error in getting a connection from the data source. The issue was investigated by rsrinivasan, bganganna, and rvaidya, and it was found that the connectivity to Druid was broken. After fixing the connectivity issue, the dashboard started loading again. The root cause of the issue was not mentioned in the discussion thread.",8,"['dashboard loading', 'connectivity issue fixed']","['summary API failure', 'connection error', 'Druid connectivity broken']",,[]
1683682555.691819,U040RCBPBEC,"{'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread includes comments on various code improvements such as fixing a typo in the requirements.txt file, upgrading the llama index, adding more details to the open text comment, renaming a class, implementing back-off handling, generalizing a class, considering caching cohort filter generation, and removing the limit on pulse instances. The thread also mentions the possibility of creating a summary using separate document chunks and suggests doing a POC along with summaries. No specific JIRA ticket or links are mentioned.",2,"['fixing typo in requirements.txt file', 'upgrading llama index', 'adding more details to open text comment', 'renaming a class', 'implementing back-off handling', 'generalizing a class', 'considering caching cohort filter generation', 'removing limit on pulse instances', 'creating summary using separate document chunks', 'doing POC along with summaries']",[],,[]
1693256845.409949,U026PMDB1ND,"{'U02Q02G09QX', 'U026PMDB1ND'}","The discussion thread revolves around the deployment issue of the 'vms-askai-service' repository. The user encountered a ""ModuleNotFoundError"" due to a missing module. It was suggested that the environment might not be set correctly. The user confirmed that it was their first time deploying the service from their current laptop. No JIRA ticket or other links were mentioned in the discussion.",51,"['deployment issue', 'vms-askai-service']","['ModuleNotFoundError', 'missing module', 'environment not set correctly', 'first time deploying']",,[]
1686821451.928689,U0336QZAF98,"{'U0336QZAF98', 'bot_message'}",The discussion thread is about the AvaticaPolarisHttpClient making a token call for every request. The issue has been logged as COLL-2577 and a task has been created to change the Avatica client to use an API key. The current solution is to add trace logging for acquiring the Polaris connection.,2,"['task created to change Avatica client to use API key', 'trace logging for acquiring Polaris connection']",['AvaticaPolarisHttpClient making token call for every request'],NEUTRAL,[]
1693534907.966809,U03DHUAJVMK,"{'U03DHUAJVMK', 'bot_message'}","The user, U03DHUAJVMK, pointed out that they only receive responses for ""factors influencing sentiment"" and get an error for other questions. They asked if this is a configuration change that can be included in version 0.9.14 HF. The bot, cc bganganna, created a bug ticket DAT-1332 to address this issue.",2,"['configuration change', 'version 0.9.14 HF']","['error', 'bug ticket DAT-1332']",,[]
1690414991.986729,U040RCBPBEC,"{'U0431DZTPJM', 'U0336QZAF98', 'U040RCBPBEC', 'U026PMDB1ND'}","The discussion thread involves team members discussing deployment issues and code changes related to the askAI service. There are mentions of errors in locating QDRANT_SECRET and QDRANT_URL, a typo in the code, issues with the poetry.lock file, and a revert of changes made by Anzar. The lock file is regenerated and the pipeline passes.",20,"['deployment issues', 'code changes', 'revert of changes', 'regenerated lock file', 'pipeline passes']","['errors in locating QDRANT_SECRET', 'errors in locating QDRANT_URL', 'typo in the code', 'issues with poetry.lock file']",,[]
1688621936.658079,U03RSS0S76Y,"{'U03RSS0S76Y', 'U040RCBPBEC'}","The discussion thread is about reviewing initial API specs for the CISO Dashboard. The score-provider gives a score at the project level, with pointDTO representing the current score and pointDiff.comparedTo representing the previous score. The API has been modified to only include trend difference and direction, rather than absolute score values for two periods.",5,"['API specs', 'CISO Dashboard', 'score-provider', 'pointDTO', 'trend difference', 'direction']",['absolute score values'],NEUTRAL,[]
1683610546.284609,U0336QZAF98,"{'U02Q02G09QX', 'U0336QZAF98'}","The discussion thread involves U0336QZAF98 and U02Q02G09QX discussing the usage and cost of the OpenAI token for an EC2 instance. U0336QZAF98 mentions that there were requests to the Davinci model on May 6 and May 8, and they analyze the cost based on a previous proof of concept (POC). U02Q02G09QX notices that the token's usage shows $0, indicating that no one has used AI chat that day. U0336QZAF98 confirms that they did not change the token and suggests investigating the surge in requests.",9,"['usage of OpenAI token', 'cost analysis', 'previous proof of concept']","['no usage of AI chat', 'surge in requests']",,[]
1678324817.675569,U040RCBPBEC,"{'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread is about reviewing the steps for the 0.9.9 update for the Druid spec. It is mentioned that the details need to be updated for the pulse instance status. The changes were made only for the follow-up comments count feature. There is a request to add instructions for ""pulse_instance_state"" and to merge the changes made in the INT environment.",5,"['reviewing steps', 'changes for follow-up comments count', 'adding instructions for pulse_instance_state', 'merging changes in INT environment']",['details need to be updated for pulse instance status'],NEUTRAL,[]
1693497864.592319,U0336QZAF98,"{'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread is about an issue with the tree summarizer in the llamaindex project. The error is caused by a maximum recursion depth being exceeded while calling a Python object. The impact and possible workaround are discussed, and it is suggested to make the tree summarizer configurable. The use of an org account for public interactions is also mentioned. A comment is removed, and it is suggested to handle the issue of empty input data by throwing an exception instead of disabling the tree summary mode.",7,"['issue with tree summarizer', 'possible workaround', 'making tree summarizer configurable', 'use of org account for public interactions']","['maximum recursion depth exceeded', 'comment removed', 'handling empty input data']",MIXED,[]
1693407797.790539,U040RCBPBEC,"{'bot_message', 'U03RSS0S76Y', 'U040RCBPBEC'}","The team discussed adding score drop notifications for the Voice of Team (VOT) feature in the next sprint. They planned to evaluate pulse scores and create an event with details like metadata, engagement and overall scores for both the pulse and product instances. They scheduled a meeting to discuss further details and timeline workflow. The JIRA ticket for this task is DAT-1329.",5,"['adding score drop notifications for VOT feature', 'evaluating pulse scores', 'creating event with metadata, engagement, and overall scores', 'scheduling meeting to discuss further details and timeline workflow']",[],POSITIVE,[]
1692793750.494569,U0336QZAF98,"{'U03RSS0S76Y', 'U0336QZAF98'}","The discussion thread is about the latest hotfix deployed in the stage and its impact on trend analysis. The user ""snangia"" asks for confirmation if trend analysis is working as expected. The user ""rsrinivasan"" inquires about any recent changes related to trend analysis. ""snangia"" clarifies that there were no changes in trend analysis, only refactoring of prompts configuration. However, they express uncertainty about potential side effects and request the tenant ID and product instance ID for testing. ""rsrinivasan"" acknowledges that something broke with the new prompts and shares a summary of the issue they encountered. They also mention that the replacement of ""N/A"" with custom code did not work as expected.",8,"['hotfix', 'trend analysis', 'refactoring']","['uncertainty', 'side effects', 'break', 'issue', 'replacement']",,[]
1692764252.589389,U0336QZAF98,"{'bot_message', 'U0336QZAF98', 'U040RCBPBEC', 'U02GC8SE18V'}","The discussion thread addresses the issue of duplicate items in the timeline summaries. It was discovered that the duplicates were caused by the score provider. The pulse instance ID ""71936"" appeared in the timeline summaries for June, July, and August. No references to JIRA tickets or other links were mentioned in the thread.",12,"['duplicate items', 'timeline summaries', 'score provider']",[],NEUTRAL,[]
1692228867.663659,U040RCBPBEC,"{'U0336QZAF98', 'U040RCBPBEC'}",The discussion is about the mapping of the timestamp column in Qdrant. rsrinivasan asks if it is mapped to the TTL of the pulse or the data ingestion time to Qdrant. akalyta confirms that they are also using it and asks if anyone has verified this. svummidi responds that it is mapped to the endedAt property of pulseInstance. No JIRA ticket or other links are mentioned.,2,"['mapping of timestamp column', 'using timestamp column']",['no JIRA ticket or links mentioned'],NEUTRAL,[]
1691163109.909519,U040RCBPBEC,"{'U055SRY74R0', 'U040RCBPBEC'}","U040RCBPBEC asks svummidi to check if their next sprint tasks have been added to the sprint, as they plan to start it in an hour. U055SRY74R0 responds that they have not received their upcoming tasks yet, but will add them once they do.",2,[],[],NEUTRAL,[]
1690415903.856139,U040RCBPBEC,"{'U0431DZTPJM', 'U0336QZAF98', 'U040RCBPBEC'}",The discussion thread is about confirming tasks for the next sprint. U040RCBPBEC asks ppant and rsrinivasan to confirm their tasks. U0336QZAF98 adds their tasks and plans to check with ppant. U0431DZTPJM also adds their tasks. No JIRA ticket or other links are mentioned.,4,"['confirming tasks', 'adding tasks']",['no JIRA ticket or links mentioned'],,[]
1681070696.868299,U042KRZPXHT,"{'U0336QZAF98', 'U042KRZPXHT', 'U040RCBPBEC'}",The discussion thread is about adding a `groupBy` field in the sentiment-analysis timeseries API. The user wants to know which fields are supported in `groupBy` and also requests to add `success` and `error` fields in the API. Another user suggests using HTTP status codes to determine success or error. The conversation includes links to the API documentation.,6,"['groupBy field', 'API documentation']",['error field'],,[]
1692205569.595599,U03RSS0S76Y,"{'U03RSS0S76Y', 'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread revolves around the issue of the score not changing for the Sprint Execution initiative. It is mentioned that the score did not change due to an ingestion issue, specifically the AskAi URL coming as empty. The team discusses the involvement of AskAi in the score change and suggests showing a message instead of an empty state when there are no significant changes in the pulse feedback. They also discuss the possibility of showing a message when there are no new pulses compared to the previous month.",19,"['Involvement of AskAi in score change', 'Showing message instead of empty state', 'Showing message when no significant changes in pulse feedback']","['Score not changing for Sprint Execution initiative', 'Ingestion issue with AskAi URL', 'No new pulses compared to previous month']",NEUTRAL,[]
1691513568.063179,U0336QZAF98,"{'U0336QZAF98', 'U040RCBPBEC'}","The team needs a script or tool like retool to easily populate dummy cache data for trend, timeline, and chat. The suggestion is to do it on-demand in this release and then build tooling based on usage patterns. For now, a wiki page explaining how to do it will suffice, with assistance from the data team.",2,"['retool', 'populate dummy cache data', 'wiki page']",['none'],,[]
1691089918.186659,U026PMDB1ND,"{'U02Q02G09QX', 'U026PMDB1ND'}","The discussion thread is about why a service is being bounced frequently. The reason is that there are several merges happening daily, and bouncing the service helps in quickly pulling the new image with the same tag. No specific JIRA ticket or other links are mentioned.",5,['quickly pulling new image'],['service being bounced frequently'],NEUTRAL,[]
1689603637.708599,U02GC8SE18V,"{'U03RSS0S76Y', 'U040RCBPBEC', 'U02GC8SE18V'}","The discussion is about the significance of the trendTaskId returned when querying for single or multiple tree nodes. The first question is whether the response can be queried using only the trendTaskId. The second question is whether all the nodes need to be queried together or if they can be queried individually with the same trendTaskId. The reason for these questions is to determine the best approach for querying the 2X2 matrix. There is also a suggestion to return task ids even if the data is already in cache, or to use the same API with individual nodes instead of the whole list. Additionally, there is a question about whether the engagementId should be sent by the apps or if the data should include it in the response. No specific details or references to JIRA tickets or other links are mentioned.",3,"['Significance of trendTaskId', 'Querying response using trendTaskId', 'Querying individual nodes with same trendTaskId', 'Returning task ids even if data is in cache', 'Using API with individual nodes', 'Including engagementId in response']",[],,[]
1694494882.287169,U0336QZAF98,"{'U03RSS0S76Y', 'U03NZ7Z52S2', 'U02GC8SE18V', 'U02SCRTM2M7', 'U0336QZAF98'}","The discussion thread includes a suggestion to check for caching issues in the estimation service. There is a proposal to exclude pulses where the closeTime is not within the start and end dates. There is also a suggestion to separate pollIds into pollIds and carriedOverIds for better query capabilities. Concerns are raised about additional API calls and the need for precomputed data. The issue of counts not matching when the summary feature flag is on is mentioned, and it is suggested to wait for the summaries to generate and check the close times of the pulses. The API response from the estimation service for the retro tenant, platform engagement is shared.",40,"['caching issues in estimation service', 'separating pollIds into pollIds and carriedOverIds', 'checking closeTime of pulses', 'API response from estimation service']","['additional API calls', 'need for precomputed data', 'counts not matching with summary feature flag on']",NEUTRAL,[]
1692285025.842729,U03RSS0S76Y,"{'U03RSS0S76Y', 'U040RCBPBEC'}","The discussion thread is about protecting a specific branch in a GitLab merge request. They also discuss finding a tenant with good data for trend analysis, and mention a tenant on the int environment. There is a formatting issue with the summaries, and they discuss using bullet forms and markdown for better formatting. No specific JIRA ticket or other links are mentioned.",6,"['protecting branch in GitLab merge request', 'finding tenant for trend analysis', 'using bullet forms and markdown for formatting']",['formatting issue with summaries'],,[]
1692186927.776359,U03DHUAJVMK,"{'U0431DZTPJM', 'U0336QZAF98', 'U03DHUAJVMK', 'U03RSS0S76Y'}","The discussion thread involves deployment issues with the askai hf on int and stage environments. There were merge requests and fixes made to resolve the issues. The migration job was started in the int environment, and after testing, the deployment was bounced in the stage environment. The thread also mentions summarizing pulses for tenants 15328, 17875, and 16992. No specific JIRA ticket or other links are mentioned.",30,"['merge requests', 'fixes', 'migration job', 'deployment bounce', 'summarizing pulses']",['deployment issues'],NEUTRAL,[]
1691464939.124319,U0336QZAF98,"{'U0336QZAF98', 'U040RCBPBEC'}",The discussion thread is about populating AskAi qdrant data in the retro tenant and releasing it. The plan is to load one pulse at a time using a Python script to avoid hitting thresholds. Another option is to create a migration job and API spec in AskAi to trigger the migration. The steps will be documented for future reference.,6,"['Populating AskAi qdrant data', 'Loading one pulse at a time', 'Using Python script', 'Creating migration job', 'Creating API spec', 'Documenting steps']",['Hitting thresholds'],NEUTRAL,[]
1691007887.306349,U03RSS0S76Y,"{'U03RSS0S76Y', 'U040RCBPBEC'}",The discussion thread involves snangia requesting that the pipeline runner cache for the estimation service be cleared and a specific branch be protected. snangia confirms that the cache has been cleared and rtaraniganty has changed the branch to a protected one. snangia also requests svummidi to update the branch back to develop after testing is completed. No references to JIRA or other links are mentioned.,3,"['Clearing pipeline runner cache', 'Protecting a specific branch', 'Cache cleared', 'Branch protected', 'Updating branch after testing']",[],,[]
1690882996.437169,U0336QZAF98,{'U0336QZAF98'},"The discussion thread is about changing the root path of the vms-services-aski to ""vms-services-askai"" to follow the same convention as other services. The user provides a command to update the remote URL and suggests running it at the root of the local repository. No references to JIRA or other links are mentioned.",4,"['changing root path', 'follow convention']",['no JIRA reference'],,[]
1690479597.265909,U040RCBPBEC,"{'U0431DZTPJM', 'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread is about an error encountered while installing a package using Poetry. The error is related to a digest mismatch, which is caused by a new build for the events_common package with the same version. The solution is to regenerate the lock file to include the new build. The team also discusses the need to ensure everyone is using the same version of Poetry. A document with common issues and a suggestion to mention the Poetry version in pyproject.toml is mentioned. The thread ends with a plan to include the diff in the merge request for further discussion. No references to JIRA or other links are provided.",11,"['solution to regenerate lock file', 'discussion about ensuring same version of Poetry', 'mentioning Poetry version in pyproject.toml', 'plan to include diff in merge request']","['error encountered while installing package using Poetry', 'digest mismatch caused by new build for events_common package']",NEUTRAL,[]
1678560732.520929,U040RCBPBEC,"{'U02Q02G09QX', 'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread revolves around the exploration of LlamaIndex for summarization use cases. The participants discuss the benefits and applications of LlamaIndex, such as sentiment improvements and holistic data inference. They also mention the possibility of creating a Wiki page to summarize the technology. The thread includes references to the LlamaIndex documentation and an article explaining its functionality. JIRA ticket: <https://pinstripe.atlassian.net/wiki/spaces/PRODUCT/pages/546340912/Creating+custom+GPT+indexes+with+LlamaIndex>",9,"['benefits of LlamaIndex', 'applications of LlamaIndex', 'sentiment improvements', 'holistic data inference', 'creating a Wiki page']",[],POSITIVE,[]
1676522127.824829,U02SF36PVKL,"{'U02SF36PVKL', 'U040RCBPBEC', 'U02GC8SE18V'}",The discussion thread is about tracking the relationship between parent questions and follow-up questions in the sentiment heatmap. They discuss passing information such as parentPulseQuestionId while pushing data to VMS and storing the relation in Postgres and Druid. They also mention the need for time range-based queries and additional groupBys in the Druid schema. A JIRA ticket for this discussion is mentioned: <https://pinstripe.atlassian.net/browse/COLL-1913>.,9,"['tracking relationship between parent questions and follow-up questions', 'passing information such as parentPulseQuestionId', 'storing relation in Postgres and Druid', 'time range-based queries', 'additional groupBys in Druid schema']",[],NEUTRAL,[]
1676192765.515999,U042KRZPXHT,"{'U0336QZAF98', 'U042KRZPXHT', 'U02GC8SE18V'}","The discussion thread is about the absence of the ""success"" and ""error"" fields in the response of sentiment-analysis APIs. User U042KRZPXHT created a JIRA ticket (DAT-840) to track this issue. User U0336QZAF98 mentions that some APIs in estimation also lack these fields. User U042KRZPXHT and U02GC8SE18V express surprise and suggest that this issue was overlooked previously.",4,['success and error fields in sentiment-analysis APIs'],['absence of success and error fields in sentiment-analysis APIs'],NEUTRAL,[]
1694038436.616079,U040RCBPBEC,"{'U0431DZTPJM', 'U02Q02G09QX', 'U03RSS0S76Y', 'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread includes plans for the upcoming week. U040RCBPBEC plans to work on refactoring API clients, POC work for passive monitoring, a bug related to score provider, and testing with Lastpass data. Other team members have plans related to fixing NaN values, deduplication, regression testing, chat improvements, retry configuration, hierarchical testing, celery opentelemetry integration, pulse exclusion, pulse responses data migration, event metrics summary, handling pulse close event, and integration test improvements. No specific JIRA tickets or links are mentioned.",7,"['refactoring API clients', 'POC work for passive monitoring', 'testing with Lastpass data', 'fixing NaN values', 'deduplication', 'regression testing', 'chat improvements', 'retry configuration', 'hierarchical testing', 'celery opentelemetry integration', 'pulse exclusion', 'pulse responses data migration', 'event metrics summary', 'handling pulse close event', 'integration test improvements']",['bug related to score provider'],NEUTRAL,[]
1692336883.766669,U02SCRTM2M7,"{'U02SCRTM2M7', 'U0336QZAF98'}","The discussion thread is about the failure of the Timeline API on the stage for the dashboard page. The error message indicates a problem with the script, specifically with the 'unpack' function. The issue was resolved after further investigation. No JIRA ticket or other links were mentioned.",4,[],['failure of Timeline API on stage for dashboard page'],NEGATIVE,[]
1692316890.207469,U040RCBPBEC,"{'U0431DZTPJM', 'U03RSS0S76Y', 'U040RCBPBEC'}",The discussion thread is about handling N/A summaries in the UI for pulses that closed without any participants. The team discusses clearing the cache at the tenant level and investigates issues with trend analysis summaries. They also mention deploying version 0.0.1.2 in the INT environment. No specific JIRA ticket or links are mentioned.,17,"['Clearing cache at tenant level', 'Deploying version 0.0.1.2 in INT environment']","['Issues with trend analysis summaries', 'Handling N/A summaries in UI for closed pulses without participants']",NEUTRAL,[]
1691665111.654759,U0336QZAF98,"{'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread is about the deployment of code and the use of the CDK. The main branch is still using old code, so deployment is being done from the release/0.9.14 branch or develop branch. There is also mention of editing in Redis directly. One person apologizes for using the incorrect CDK during testing.",4,"['deployment of code', 'use of CDK']","['old code', 'editing in Redis directly', 'incorrect CDK']",,[]
1690459135.080949,U02SCRTM2M7,"{'U02SCRTM2M7', 'U040RCBPBEC', 'U02GC8SE18V'}","The discussion thread is about duplicate entries in the response for the timeline API. U02SCRTM2M7 reported the issue and provided all the necessary details in the JIRA ticket DAT-1196. U02GC8SE18V asked for an update on the issue, and U040RCBPBEC apologized for forgetting to fix it but mentioned that the fix has been deployed and verified. U02SCRTM2M7 confirmed that the fix is working.",4,['fix deployed and verified'],['duplicate entries in response'],,[]
1689108850.114399,U040RCBPBEC,"{'U02Q02G09QX', 'U040RCBPBEC', 'U02GC8SE18V'}","The discussion thread is about testing data in the 44077 tenant. The user akalyta copied Mattel data from Prod to the tenant with the product name as Acme. The data is considered good for testing because it has user attributes and question dimensions. The user U02Q02G09QX is experiencing an error when logging into the int environment, but svummidi suggests clicking on engagements to see the list of projects. U02GC8SE18V asks svummidi to share the credentials 1:1, as they would like to test sorting heatmap on this data. rvaidya is invited as an admin to the tenant and is provided with the account password.",7,"['testing data', 'good for testing', 'user attributes', 'question dimensions', 'clicking on engagements', 'sorting heatmap']","['error when logging in', 'sharing credentials', 'providing account password']",MIXED,[]
1682866750.223539,U03DHUAJVMK,"{'U02BV2DGUKC', 'U03DHUAJVMK', 'U0336QZAF98', 'U042KRZPXHT'}","The discussion thread is about a null value for the date in production tenants. It is mentioned that a new field is being used in the backend to save the date, which may not be populated for old comments. The issue is being addressed by migrating the question created_at and updated_at fields to the estimation service. Repopulation of the data is required either through the PMS API or a database script. No bouncing of the estimation service is necessary. The change has been made for all production tenants.",8,"['new field in backend', 'migrating question created_at and updated_at fields', 'repopulation of data']",['null value for date in production tenants'],,[]
1682400569.629159,U040RCBPBEC,"{'U02Q02G09QX', 'U040RCBPBEC'}",The discussion thread involves a conversation between U040RCBPBEC (akalyta) and U02Q02G09QX (svummidi) regarding an issue with getting proper responses in their product. They suspect that the model is confused and are investigating the issue further. They decide to change the model to davinci and discuss the issue with embeddings in the responses. They also mention making updates and fixing errors on the server. No specific JIRA ticket or other links are mentioned.,15,"['model change to davinci', 'discussion on embeddings in responses', 'making updates on server', 'fixing errors on server']","['issue with getting proper responses', 'model confusion']",,[]

1682345455.049449,U040RCBPBEC,"{'U02Q02G09QX', 'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread includes updates on the POC demo, changes made to the EC2 Llama index for security and caching, and new findings for the design. It is mentioned that the stage ECS IP addresses were not added to whitelisting initially, but after adding them, testing in stage was successful. The use of sbert as an alternative to OpenAI embeddings is also discussed. The feedback collection API for the chat UI has not been started yet due to other tasks and cost issues. The issue of caching and loading the model from S3 is mentioned, and it is noted that the huggingface demo is not updated with the fixes.",7,"['POC demo updates', 'Successful testing in stage after adding ECS IP addresses to whitelisting', 'Discussion on using sbert as an alternative to OpenAI embeddings']","['Changes made to EC2 Llama index for security and caching', 'Feedback collection API for chat UI not started yet', 'Issue with caching and loading model from S3', 'Huggingface demo not updated with fixes']",MIXED,[]

1678930443.707149,U040RCBPBEC,"{'U0336QZAF98', 'U040RCBPBEC'}",The discussion thread involves users U040RCBPBEC and U0336QZAF98. U040RCBPBEC informs U0336QZAF98 about an NPE issue in the estimation service build and mentions making changes to fix it. U0336QZAF98 agrees with the fix and mentions that the implementation of `CollectionUtils.Mergelist` handles null cases. U040RCBPBEC adds that `List.of` is not tolerant of null values. No references to JIRA or other links are mentioned.,3,"['NPE issue fix', 'changes to fix NPE issue', 'implementation of CollectionUtils.Mergelist handles null cases']",['List.of not tolerant of null values'],,[]
1672805150.402259,U02GC8SE18V,"{'U02BV2DGUKC', 'bot_message', 'U02GC8SE18V', 'U042KRZPXHT', 'U040RCBPBEC'}","The discussion thread revolves around an issue with the API `/api/v1/value/latest/custom/grouped/{tenantId}`. The user svummidi reported that when using the scoreAggPeriod as ""year"", it returns null, but it works for ""month"". The user U040RCBPBEC suggests using ""month"" for now and creates a task to add a new option for querying all data. There is also a discussion about using the API to get the count of users and the ability to sort the response. A JIRA ticket DAT-765 is created to address the query for all data.",23,"['API issue', 'scoreAggPeriod', 'month', 'querying all data', 'count of users', 'sort response']","['null response', 'year']",,[]

1694159001.357959,U03NZ7Z52S2,"{'U0336QZAF98', 'U02SCRTM2M7', 'U03NZ7Z52S2'}","The discussion thread is about a discrepancy in the number of pulse instances and summaries in the retro tenant. It is mentioned that there are 5 responded pulse instances but 8 summaries, with 3 pulses being carried forward since February due to no data for those personas. The question is raised whether the carry forwarded pulse summaries should be shown, as it may cause confusion when showing the dates for the pulses. There is a mention of a fix by sfarooqh to only show pulse summaries within the time range, and it is confirmed that the fix has been deployed in the stage. The discussion concludes with a plan to test the fix once it is deployed. No JIRA ticket or other links are mentioned.",5,"['discrepancy in pulse instances and summaries', 'fix deployed in stage']",['confusion with pulse dates'],,[]
1691090298.911729,U026PMDB1ND,"{'U02Q02G09QX', 'U026PMDB1ND'}","The discussion thread is about a command line issue. The user initially used ""sh"" to run a bash script, but it was pointed out that it is not necessary. The user then successfully ran the script without ""sh"". The conversation also mentions a documentation error and a potential problem with a merge to the develop branch. No JIRA ticket or other links are mentioned.",14,"[""Successfully running script without 'sh'""]","['Command line issue', 'Documentation error', 'Potential problem with merge to develop branch']",,[]

1686844425.921059,U055SRY74R0,"{'U02HCMTQU3W', 'U055SRY74R0', 'U040RCBPBEC', 'U02GC8SE18V'}","The discussion thread is about the idea of creating a company-wide text document to log meanings and error messages for future reference. The document could be used to create a chatGPT-based FAQ/helper bot. The team currently uses Nexus for externalization of strings and has some UX writing standards. They do not have a tool for documentation yet, but there are plans to prioritize it. The idea is to create a corpus of knowledge while developing features to make it easier and of better quality. The goal is to provide instant responses to user questions about errors or the meaning of certain terms. There is a mention of a JIRA ticket for bringing in a documentation system.",6,"['Creating company-wide text document', 'Logging meanings and error messages', 'Creating a chatGPT-based FAQ/helper bot', 'Developing features for better documentation', 'Providing instant responses to user questions']","['Not having a tool for documentation', 'Lack of a documentation system']",POSITIVE,[]

1683689973.147549,U0336QZAF98,"{'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread is about the availability of the pgvector feature in AWS RDS for PostgreSQL. The feature is only available in version 15.2, so the team needs to upgrade their older versions of common instances to use it. The discussion also mentions conducting experiments with Postgres. No specific JIRA ticket or other links are mentioned.",2,"['pgvector feature', 'conducting experiments with Postgres']",['older versions of common instances'],,[]
1677137889.835179,U02GC8SE18V,"{'U0336QZAF98', 'U02GC8SE18V'}",The discussion thread is about deploying the latest score-provider in the INT environment. User U0336QZAF98 agrees to deploy the latest build. No specific details or references to JIRA tickets or other links are mentioned.,2,"['deploying latest score-provider', 'deploy latest build']",[],,[]

1692662865.672009,U040RCBPBEC,"{'U0431DZTPJM', 'U02BV2DGUKC', 'bot_message', 'U03KLHDKL1H', 'U026PMDB1ND', 'U040RCBPBEC'}","The discussion thread revolves around the issue of compatibility between the requests library and platform-libs-epoch in Python projects. The team considers downgrading the requests library or upgrading the Python version. They also discuss the possibility of loosening the version constraint for requests in platform-libs-epoch. Finally, they decide to update the requests version in epoch to allow all lambdas to use 2.28.2, while other projects can use any version they prefer. The fix for the coordination lambda has been deployed to production.",27,"['Compatibility between requests library and platform-libs-epoch', 'Updating requests version in epoch', 'Fix for coordination lambda deployed to production']","['Downgrading requests library', 'Upgrading Python version', 'Loosening version constraint for requests in platform-libs-epoch']",POSITIVE,[]

1691513756.511789,U03RSS0S76Y,"{'U03RSS0S76Y', 'U0336QZAF98', 'U040RCBPBEC'}",The discussion thread focused on improving the interpretation of survey data. They discussed changing question texts and answer choices to provide better context for AI. They also mentioned refining prompts and data storage for better output. They prioritized trimming the summary length and eliminating filler sentences from AskAI. No JIRA ticket or other links were mentioned.,11,"['improving interpretation of survey data', 'changing question texts and answer choices', 'providing better context for AI', 'refining prompts and data storage for better output', 'trimming summary length', 'eliminating filler sentences from AskAI']",[],NEUTRAL,[]

1691130426.600749,U040RCBPBEC,"{'U0431DZTPJM', 'U03RSS0S76Y', 'U0336QZAF98', 'U040RCBPBEC'}","The discussion is about whether to have separate APIs for getting the status and result of a workflow or to combine them into one API. Some suggest combining them into a single API, while others argue for keeping them separate. The idea of having a generic response object for all workflows is also mentioned. No specific JIRA ticket or links are mentioned in the discussion.",11,"['combining APIs', 'generic response object']",['separate APIs'],,[]
1689728000.722689,U02Q02G09QX,"{'U02Q02G09QX', 'U03RSS0S76Y', 'U0336QZAF98', 'U040RCBPBEC'}",The discussion thread is about adding integration for the score-provider in the AskAI repository. rsrinivasan suggests that snangia generate the necessary score-provider file using pom. svummidi mentions that snangia is likely already working on it. U0336QZAF98 says they will generate the score provider and submit a merge request with a dummy API call for testing.,6,"['integration', 'score-provider', 'generate file', 'merge request', 'dummy API call']",[],,[]

1688624422.552659,U040RCBPBEC,"{'U0431DZTPJM', 'U055SRY74R0', 'U03RSS0S76Y', 'U040RCBPBEC'}",The team members are discussing adding a person named svummidi to the project. They mention syncing up with rsrinivasan for more tasks and creating placeholder tasks if needed. They also mention adding more tasks and confirming once svummidi is added. No specific JIRA ticket or other links are mentioned.,6,"['adding svummidi to the project', 'syncing up with rsrinivasan', 'creating placeholder tasks', 'adding more tasks']",[],,[]

1685707933.314279,U0336QZAF98,"{'U0336QZAF98', 'U040RCBPBEC'}","The user U0336QZAF98 is asking about the time-to-live (ttl) for the logs related to ""Acquiring token from Polaris for Avatica connection."" They mention seeing a large number of these logs in the INT environment. User U040RCBPBEC responds, saying they will fix the issue today. No JIRA ticket or other links are mentioned.",2,"['time-to-live (ttl)', 'logs', 'Acquiring token from Polaris for Avatica connection']","['large number of logs', 'INT environment']",POSITIVE,[]

1679983721.528599,U042KRZPXHT,"{'U042KRZPXHT', 'U040RCBPBEC'}","The discussion is about adding a `groupBy` feature to the aggregated counts API, allowing multiple fields to be grouped and the response to be a nested map. This feature would optimize API calls to fetch respondent userIds per question in a pulse instance. The use case is not related to the Reports unknown persona issue. No JIRA ticket or other links are mentioned.",3,"['groupBy feature', 'aggregated counts API', 'nested map', 'optimize API calls']",['Reports unknown persona issue'],,[]
1694534367.082639,U0336QZAF98,"{'U03RSS0S76Y', 'U0336QZAF98', 'U040RCBPBEC'}","The team discussed adjusting priorities for the sprint to ensure that the apps and UI teams are not blocked on data. They also mentioned cross-team dependencies for showing metric summary and score drop notification. They decided to provide metric name, description, score, and a prompt for executive summary for the metric summary. No specific JIRA ticket or other links were mentioned.",5,"['Adjusting priorities for the sprint', 'Ensuring apps and UI teams are not blocked on data', 'Providing metric name, description, score, and prompt for executive summary']",['Cross-team dependencies for showing metric summary and score drop notification'],NEUTRAL,[]
1691050280.119539,U0431DZTPJM,{'U0431DZTPJM'},"The discussion thread mentions that the estimation-service's test coverage is temporarily decreased to 83% for testing purposes. It is stated that the coverage will be reverted back soon. Later, it is mentioned that the coverage is back to 86% in a specific MR. The link to the MR is provided.",2,"['test coverage', 'reverted back']",['temporarily decreased'],POSITIVE,[]
1689005902.378459,U0336QZAF98,"{'U03RSS0S76Y', 'U0336QZAF98', 'U040RCBPBEC'}",The discussion thread is about syncing up for specs. svummidi and snangia have created a spec file and plan to go over it. They agree to discuss at 9PM PST or in the data meeting tomorrow. svummidi is asked to review the API specs offline. The updated API specs are shared with cc: rvaidya.,6,"['syncing up for specs', 'creating spec file', 'discussing at 9PM PST', 'discussing in data meeting', 'reviewing API specs offline', 'sharing updated API specs']",[],,[]

1685730449.053579,U040RCBPBEC,"{'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread is about changes to the build settings related to parallel tests. The test case ""cubefilter time bound test"" was changed to remove the ""_test"" suffix. The develop branch was deployed on INT, with integration test coverage at 86% and unit test coverage at 58%. The overall code coverage is now 93%, with the bar raised to 90% in the POM file. This change also reduces the overall build time in Gitlab by ~5 minutes.",6,"['changes to build settings', 'parallel tests', 'test case name change', 'deployment on INT', 'integration test coverage', 'unit test coverage', 'code coverage', 'POM file update', 'reduced build time']",[],,[]

1692828216.187349,U026PMDB1ND,"{'U040RCBPBEC', 'U026PMDB1ND'}","The discussion thread is about fixing the coverage in the GitLab CI configuration file. User U026PMDB1ND made a change to the file, replacing the coverage file path with a new one. User U040RCBPBEC thanked U026PMDB1ND for fixing the issue. No JIRA ticket or other links were mentioned.",2,"['fixing coverage', 'GitLab CI configuration file']",[],POSITIVE,[]

1691517646.699869,U0336QZAF98,"{'U03RSS0S76Y', 'U0336QZAF98', 'U040RCBPBEC'}",The discussion thread is about an issue in the code where the task_result.get() is being called even when the status is not success. One suggestion is to use task_result.ready() instead of checking the status string explicitly. Another suggestion is to handle errors using the Celery API and decide on the client side. It is mentioned that AsyncResult.get() re-raises the same exception with which the task failed in case of failure. The suggestion is to handle it by checking the status as SUCCESS case everywhere. There is also a mention of making changes to the global exception handler. No specific JIRA ticket or other links are mentioned.,6,"['task_result.ready()', 'handle errors using Celery API', 'handle exception in SUCCESS case', 'changes to global exception handler']",['task_result.get() called when status is not success'],NEUTRAL,[]
1690781325.882089,U03RSS0S76Y,"{'U03RSS0S76Y', 'U0336QZAF98'}","The discussion is about sending the output of a workflow from Askai service to an estimation service. The question is why only the workflow status is sent and then an API call is made to Askai to get the output, instead of sending the output directly in the event payload. The response mentions that the maximum size of an SNS message is 256 KB and suggests triggering an Askai request when the event is received to avoid an additional API call.",3,"['Sending workflow status', 'Triggering Askai request', 'Avoiding additional API call']","['Sending output directly in event payload', 'Limitation of SNS message size']",NEUTRAL,[]

1689749131.579689,U0336QZAF98,"{'U03RSS0S76Y', 'U0336QZAF98'}","The discussion thread is about the .env file needed for local testing of the askai flow via IDE. It includes various configurations such as Celery broker URL, Celery result backend, S2S client ID and secret, S2S token URL, AWS region, user pool ID, JWK URI, API gateway URL, Qdrant URL and secret, OpenAI API key, jobs queue name and URL, PostgreSQL details, FastAPI configurations, and more. It is mentioned that they have switched to using SQS instead of Redis for the Celery broker.",3,"['local testing', 'IDE', 'Celery broker URL', 'Celery result backend', 'S2S client ID', 'S2S secret', 'S2S token URL', 'AWS region', 'user pool ID', 'JWK URI', 'API gateway URL', 'Qdrant URL', 'OpenAI API key', 'jobs queue name', 'PostgreSQL details', 'FastAPI configurations', 'SQS']",['Redis'],POSITIVE,[]
1684940086.352399,U0336QZAF98,"{'U0336QZAF98', 'U040RCBPBEC'}","Microsoft has released a new way to do prompting, which can be found on their GitHub page. However, U040RCBPBEC is having trouble understanding it and asks for specific working examples in their application context. No JIRA ticket or other links are mentioned.",2,"['Microsoft released new prompting method', 'GitHub page']","['U040RCBPBEC having trouble understanding', 'No specific working examples', 'No JIRA ticket or other links mentioned']",,[]

1694100126.363299,U040RCBPBEC,"{'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread is about sharing the .env file for the stage. The .env file contains various environment variables such as the Celery broker URL, PostgreSQL credentials, job queue details, S2S client information, API gateway URL, Qdrant related details, OpenAI API key, SNS event publisher ARN, FastAPI configuration, and LaunchDarkly environment name and API key. LD_ENV_NAME is set as ""integration"" and may need to be changed for the stage.",3,"['Sharing .env file', 'Environment variables', 'Celery broker URL', 'PostgreSQL credentials', 'Job queue details', 'S2S client information', 'API gateway URL', 'Qdrant related details', 'OpenAI API key', 'SNS event publisher ARN', 'FastAPI configuration', 'LaunchDarkly environment name', 'LaunchDarkly API key']",['LD_ENV_NAME'],NEUTRAL,[]
1693316281.740999,U03RSS0S76Y,"{'U02Q02G09QX', 'U03RSS0S76Y'}",Akalyta noticed that a commit in the askai repository is breaking code in the estimation service. Snangia thanked Akalyta for noticing and said they would take a look. Snangia shared the merge request for the estimation service and said they would let Akalyta know once it's merged. Akalyta thanked Snangia and mentioned a question about a spec dependency added in the pom. They said they would merge the merge request for now to unblock.,4,"['noticing the code break', 'thanking for noticing', 'taking a look', 'sharing merge request', 'merging to unblock']","['code break in estimation service', 'question about spec dependency']",,[]

1691647040.353409,U040RCBPBEC,"{'U03RSS0S76Y', 'U040RCBPBEC'}","The discussion thread is about adding tasks to the sprint and setting deadlines for various items, such as controlling the length of the summary, verifying the caching feature for demos, and implementing metadata-based filtering. The team also discusses the strategy for creating and releasing hotfixes and suggests testing them in the INT environment before deploying in Stage. A link to the full list of items is provided.",3,"['Adding tasks to the sprint', 'Setting deadlines', 'Controlling the length of the summary', 'Verifying caching feature for demos', 'Implementing metadata-based filtering', 'Creating and releasing hotfixes', 'Testing hotfixes in INT environment']",[],NEUTRAL,[]
1691041108.761539,U03RSS0S76Y,"{'U03RSS0S76Y', 'U0336QZAF98'}","The discussion thread is about events not being received by the estimation service, despite being published correctly from the askai service. The team is discussing whether a filter for trend analysis completion events was added in the cdk. The queue name for the events is estimation-axm-events-sqs.",3,"['Published events from askai service', 'Estimation service', 'CDK filter for trend analysis completion events']","['Events not being received', 'Queue name: estimation-axm-events-sqs']",NEGATIVE,[]

1688393958.210069,U0336QZAF98,"{'U03RSS0S76Y', 'U0336QZAF98', 'U040RCBPBEC'}",The discussion thread involves multiple participants discussing issues with the prompt and JSON response in the production environment. They mention issues such as missing mixed responses in prompts and receiving malformed JSON responses. They also discuss testing in the INT environment and making changes to increase token limits. A JIRA ticket is created to track the issues discussed.,17,"['discussion of prompt issues', 'discussion of JSON response issues', 'discussion of testing in INT environment', 'discussion of increasing token limits', 'creation of JIRA ticket']","['missing mixed responses in prompts', 'receiving malformed JSON responses']",MIXED,[]

1686731456.284089,U03NZ7Z52S2,"{'U0336QZAF98', 'U03NZ7Z52S2', 'U040RCBPBEC'}","The discussion thread includes observations made regarding the Polaris Migration. Issues such as missing data, null values, and differences in count are mentioned. There is also a mention of a support ticket opened with Imply for further analysis. The Polaris ticket for this issue is provided as well.",7,[],"['missing data', 'null values', 'differences in count']",NEUTRAL,[]

1693519474.905259,U040RCBPBEC,"{'U0336QZAF98', 'U040RCBPBEC'}",The discussion thread revolves around the issue of the similarity threshold in chat questions. One user suggests reducing the threshold to improve the system's ability to answer basic questions. Another user argues for increasing the threshold to avoid situations where some documents have answers while others do not. The use of MMR and the problem of duplicate chunks are also mentioned. The thread concludes with the understanding that MMR will not solve the refine query problem. No references to JIRA tickets or other links are provided.,4,"['reducing similarity threshold', ""improving system's ability to answer basic questions""]","['increasing similarity threshold', 'avoiding situations with inconsistent answers', 'MMR', 'duplicate chunks', 'refine query problem']",,[]

1692879167.141179,U03RSS0S76Y,"{'U03RSS0S76Y', 'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread is about persisting sentiments in AskAI. The design proof of concept (POC) for this feature is reviewed and updated based on feedback. However, it is mentioned that this task is not a priority and can be taken up in future sprints. There is a suggestion to validate the requirement of getting sentiment as metadata in Qdrant, and it is stated that sentiment analysis can be used for trend analysis and other use-cases. The possibility of reusing the GPT model for sentiment analysis is also discussed. No specific JIRA ticket or other links are mentioned.",5,"['design proof of concept (POC) for sentiment persistence', 'feedback-based updates', 'validating sentiment requirement in Qdrant', 'using sentiment analysis for trend analysis', 'reusing GPT model for sentiment analysis']","['task not a priority', 'no specific JIRA ticket or links mentioned']",NEUTRAL,[]

1692245041.657209,U040RCBPBEC,"{'U03RSS0S76Y', 'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread is about configuring prompts for chat and resolving issues with Llama index merging results. The team members, rsrinivasan and svummidi, discuss making changes to improve prompts and provide links to documentation and code changes. Another team member, snangia, offers to help with prompt configuration. No specific JIRA ticket or other links are mentioned.",9,"['configuring prompts', 'improving prompts', 'help with prompt configuration']",['resolving issues with Llama index merging results'],,[]
1692175062.361149,U03RSS0S76Y,"{'U03RSS0S76Y', 'U0336QZAF98'}","The discussion thread is about the word count for short summaries and whether it can differ per workflow or if it is unified for all. Rsrinivasan confirms that the word count can differ per workflow, with the timeline requiring a smaller count compared to the trend. No JIRA ticket or other links are mentioned.",4,['word count can differ per workflow'],['no JIRA ticket or other links mentioned'],POSITIVE,[]
1686325099.114699,U0336QZAF98,"{'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread is about the issue of the pulse instance state streaming job not running for INT in Polaris. It is suggested to run a command from the Druid spec manager to start the job. It is also mentioned that if there is no data available in the Kinesis stream, the job fails to start. It is proposed to use a push model and a different data store for easier data updating. A change has been made to introduce a postgres storage in the scoring provider service to handle different kinds of exclusions. A merge request will be raised for the changes.",5,"['pulse instance state streaming job', 'run command from Druid spec manager', 'use push model for data updating', 'introduce postgres storage in scoring provider service', 'raise merge request for changes']","['job not running for INT in Polaris', 'job fails to start if no data available in Kinesis stream']",POSITIVE,[]

1685709620.751769,U0336QZAF98,"{'U0336QZAF98', 'U028EDANJM9'}",The discussion thread involves gshenoy and pmangalapuri discussing the deployment of a build to publish scores automatically. They mention using a feature branch for a new event and provide a sample payload for the event. They also provide a curl request for syncing on demand. The conversation ends with gshenoy mentioning merging into develop after addressing review comments. No specific JIRA ticket or other links are mentioned.,4,"['deployment of build', 'publishing scores automatically', 'using feature branch', 'sample payload', 'syncing on demand', 'merging into develop']",[],,[]

1683580131.403839,U040RCBPBEC,"{'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread is about the use of GPTDocumentSummaryIndex for routing questions in Llama index. The question is whether it is possible to use OpenSearch for the Document store and Index store in addition to the Vector store. The current design does not require it, but it may be needed for future prototypes. No specific details or references to JIRA tickets or other links are mentioned.",3,"['Use of GPTDocumentSummaryIndex for routing questions', 'Possible use of OpenSearch for Document store and Index store', 'Future prototypes']","['No specific details or references', 'No mention of JIRA tickets or other links']",NEUTRAL,[]

1682570683.085479,U0336QZAF98,"{'U0336QZAF98', 'U040RCBPBEC'}",The user U0336QZAF98 noticed that a token was hardcoded in a file shared by svummidi. They questioned whether this was intentional, as anyone could see it. User U040RCBPBEC responded that they had accidentally added the file and removed the token. No JIRA ticket or other links were mentioned.,2,[],"['token hardcoded in file', 'security concern']",,[]

1682560786.137739,U03DHUAJVMK,"{'U02Q02G09QX', 'U03DHUAJVMK', 'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread is about testing the ""Ask AI"" feature with real data for accuracy and relevance. The response time is currently over 40 seconds, but improvements have been made. There is a suggestion to keep showing suggested questions on the results screen. Reprocessing data has been suggested to fix null values for some comments' dates. The user also requests to be added as a collaborator or product owner for a specific product.",12,"['accuracy and relevance', 'improvements made', 'showing suggested questions', 'reprocessing data']","['response time over 40 seconds', ""null values for comments' dates"", 'request to be added as collaborator/product owner']",NEUTRAL,[]
1678716518.044579,U02GC8SE18V,"{'U02BV2DGUKC', 'U040RCBPBEC', 'U02GC8SE18V'}",The discussion thread is about reviewing the vendor dashboard and requesting input from the score provider. There is also a mention of API changes for dashboard cards and a list of products with overall and engagement scores. There is a suggestion to start a de-comp page for this feature.,4,"['Vendor dashboard review', 'Input from score provider', 'Overall and engagement scores', 'Decomp page suggestion']",['API changes for dashboard cards'],,[]
1674621258.763489,U02GC8SE18V,"{'U02SF36PVKL', 'U033PPLNFRU', 'U040RCBPBEC', 'U02GC8SE18V'}",The discussion is about sending qualitative feedback to VMS as soon as it is received. The team is considering two options: adding a new boolean attribute or querying open pulse instances. They also discuss the possibility of using separate APIs for different pulse states. A JIRA ticket will be created to track the implementation of sending qualitative feedback.,8,"['sending qualitative feedback', 'adding new boolean attribute', 'querying open pulse instances', 'using separate APIs for different pulse states', 'creating JIRA ticket']",[],,[]

1692731923.943489,U040RCBPBEC,"{'U040RCBPBEC', 'U026PMDB1ND'}",The discussion thread is about sharing a list of tasks and discussing the benefits of using a master list or a list of tasks in Confluence before moving them to Jira. It is suggested that a mindmap might be even better for complex tasks. There is also mention of hierarchy view apps on the Atlassian marketplace that cost $50 to $100 per month.,5,"['benefits of using a master list', 'benefits of using a list of tasks in Confluence', 'benefits of using a mindmap for complex tasks']",['cost of hierarchy view apps on Atlassian marketplace'],,[]
1692209593.753829,U0336QZAF98,"{'U0431DZTPJM', 'U0336QZAF98'}","The discussion thread is about creating a hotfix for a bug in the url ingestion. The bug is related to not appending ""askai"" and causing a 404 error. The hotfix branch has been created and changes have been made to the client of ask ai and the test cases. The fix needs to be deployed in int and tested. Merge request: <https://gitlab.com/axiamatic-main/vms-services-ingestion/-/merge_requests/56>",11,"['hotfix', 'bug fix', 'url ingestion', 'client changes', 'test cases', 'deployment', 'merge request']",['404 error'],,[]
1691001466.819129,U02Q02G09QX,"{'U02Q02G09QX', 'U040RCBPBEC'}",The discussion is about a problem with mocking the SummarizationService in the EstimationE2Etest. The issue is that tests relying on the SummarizationService are failing when the mocking is moved to the InsightsTestHelper class. The user asks if anyone has encountered this before or has any ideas on how to deal with it. Another user suggests checking other tests and offers to help if the issue persists. No JIRA ticket or other links are mentioned.,2,[],"['mocking problem', 'SummarizationService', 'EstimationE2Etest', 'tests failing', 'InsightsTestHelper', 'encountered before', 'ideas to deal with', 'no JIRA ticket', 'no links mentioned']",,[]

1690872034.620169,U02SCRTM2M7,"{'U02HCMTQU3W', 'U02SCRTM2M7', 'U040RCBPBEC', 'U02GC8SE18V'}",The discussion thread is about creating tickets for sending null values for the aggregated period. The issue is that it currently works at the product level but not at the initiative and dashboard levels. The team has added all the details and sample requests for working and missing null data cases in the ticket descriptions. The team also discusses the possibility of removing all null values to minimize the payload and expanding them in the UI to the desired starting date. They also consider cases where there is no VOT data at all and propose filtering all nulls from the data even for engagements. The proposed changes include returning empty arrays if an engagement, initiative, or dashboard doesn't have any data, and returning only one entry at all levels if there are scores only for this month. The JIRA tickets mentioned are DAT-1208 and DAT-1209.,5,"['Adding details and sample requests for working and missing null data cases', 'Removing null values to minimize payload', 'Expanding null values in UI to desired starting date', 'Filtering nulls from data even for engagements', 'Returning empty arrays for entities with no data', 'Getting only one entry in all levels if there are scores only for this month']","['Issue with sending null values for aggregated period', 'Not working at initiative and dashboard levels', 'No VOT data at all']",MIXED,[]

1690203553.509529,U0431DZTPJM,"{'U0431DZTPJM', 'U040RCBPBEC'}",The discussion thread is about reviewing options for custom caching key generation in Spring Boot. Option 1 is preferred because it is simple and provides maximum flexibility. Option 2 is considered hacky. There was a suggestion to have a single class for CustomKeyGenerator to reduce code. The code examples and links to the working MR and wiki page are provided.,3,"['Option 1', 'simple', 'maximum flexibility', 'single class', 'CustomKeyGenerator']","['Option 2', 'hacky']",,[]

1684339982.515349,U0336QZAF98,"{'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread is about the support for pgvector on Amazon RDS for Postgres and Aurora Postgres. U0336QZAF98 confirms that pgvector is only supported on Amazon RDS for Postgres, not on Aurora Postgres. U040RCBPBEC asks for documentation and creates a support ticket, and later confirms that there is a feature request for pgvector support on Aurora Postgres. U0336QZAF98 updates the links on a page related to Postgres and pgvector.",4,['pgvector support on Amazon RDS for Postgres'],['pgvector support on Aurora Postgres'],MIXED,[]
1677602050.352389,U02GC8SE18V,"{'U04EV0M2VA6', 'U040RCBPBEC', 'U02GC8SE18V'}","The discussion is about adding an API to show sentiment monthly/weekly for a product instance in the timeline view. One suggestion is to make 12 API calls to load the data for the frontend, while another suggestion is to wait for the next release to integrate the bulk API. There is also a suggestion to make the API call on click instead of hover, but it is not favored as it may not be clear that the timeline can be clicked on. No specific JIRA ticket or links are mentioned.",4,"['API to show sentiment monthly/weekly', 'Integrating bulk API in next release']","['Making 12 API calls for frontend', 'API call on click instead of hover']",NEUTRAL,[]

1692312916.773879,U040RCBPBEC,"{'U0431DZTPJM', 'U0336QZAF98', 'U040RCBPBEC'}",The discussion thread is about a migration job in INT for task ID 5e5a4098-08f7-4c99-baf2-e850cdb7f389. The job took one hour to complete but generated errors related to the content being empty. The team is planning to add safety conditions to handle this issue. There is a JIRA ticket raised by ppant for further investigation.,3,"['migration job', 'completed in one hour', 'team planning to add safety conditions']","['errors related to empty content', 'JIRA ticket raised for investigation']",,[]

1691731049.505329,U040RCBPBEC,"{'U0431DZTPJM', 'U040RCBPBEC'}","The discussion is about changing the caching approach for timeline pulse instances. Currently, a list is used for caching, but it is not effective because the list can change daily. The suggestion is to use String for all cache keys and cache individual pulses instead of a list. There is no mention of any JIRA ticket or other links.",2,"['Changing caching approach', 'Using String for cache keys', 'Caching individual pulses']","['Ineffective list caching approach', 'List can change daily']",,[]

1690996407.343479,U040RCBPBEC,"{'U03RSS0S76Y', 'U0336QZAF98', 'U040RCBPBEC'}","In the discussion thread, svummidi provided credentials for accessing Mattel data in the Acme engagement under tenant 44077. They also mentioned sending an admin invite and suggested using their credentials if needed. Another user, U0336QZAF98, thanked svummidi and confirmed they would use the provided credentials. Later, svummidi mentioned adding responses from Mattel to tenant 51165 on INT. They also reported an issue with the feature not recognizing any data. U0336QZAF98 acknowledged the issue and mentioned that the question text was not populated correctly.",7,"['credentials provided', 'admin invite', 'thank you']","['issue with feature', 'question text not populated correctly']",,[]

1687257247.444389,U0336QZAF98,"{'U0336QZAF98', 'U040RCBPBEC'}",The discussion thread is about a warning that keeps appearing when using PolarisHttpClient. The user suspects that the configuration for Hikari is not matching with the underlying HTTP connection keep alive. The default HTTP keep alive is unknown. The user has submitted a request for support and is waiting for suggestions on any necessary changes. The maxLifetime value is currently set to 60 seconds.,2,"['PolarisHttpClient', 'configuration', 'support', 'suggestions', 'maxLifetime']","['warning', 'Hikari', 'HTTP connection keep alive']",NEUTRAL,[]

1692801723.802269,U040RCBPBEC,"{'U03RSS0S76Y', 'U040RCBPBEC'}","In the discussion thread, U03RSS0S76Y asks U040RCBPBEC if they can assign task 3 to themselves since U040RCBPBEC hasn't started on it yet. No other specific details or references are mentioned.",2,[],['task assignment'],,[]
1692324222.035889,U040RCBPBEC,"{'U02HCMTQU3W', 'U03RSS0S76Y', 'U040RCBPBEC'}","The discussion thread is about the readiness of three features (Trend Summary, Event Summary, and AskAI) for release. It is mentioned that all three features work as expected in the happy path, but there are some issues with generating summaries and accuracy in chat responses. It is suggested to curate the list of questions manually for AskAI and to keep the feature disabled for production customers using feature flags. There is also a suggestion to have a feature flag on the UI side to not show the summaries but still generate them in the backend for improvement purposes.",7,"['Trend Summary', 'Event Summary', 'AskAI']","['generating summaries', 'accuracy in chat responses', 'curate list of questions', 'feature flags', 'UI side']",,[]

1690980311.800389,U042KRZPXHT,"{'U0431DZTPJM', 'U03RSS0S76Y', 'U042KRZPXHT'}",The discussion thread is about an issue with the AskAI service throwing a 503 error while calling the ask question API in the estimation service. The issue was initially reported by U042KRZPXHT and was confirmed by U0431DZTPJM. The service count was decreased to 0 to complete an update rollback. The issue was resolved after redeploying the AskAI service by rsrinivasan.,9,"['503 error', 'AskAI service', 'ask question API', 'estimation service', 'issue resolved', 'redeploying']",['update rollback'],,[]
1687164148.211529,U02GC8SE18V,"{'U040RCBPBEC', 'U02GC8SE18V'}","The discussion thread is about the difference between `rawChange` and `engRawChange` in the widget response. It is clarified that `engRawChange` and `engPercentChange` are related to a new feature for unification of engagement score with value score, but they are not ready yet and should not be used.",2,"['rawChange', 'widget response']","['engRawChange', 'engPercentChange', 'new feature', 'unification', 'engagement score', 'value score']",,[]

1683138397.034019,U02Q02G09QX,"{'U02Q02G09QX', 'U0336QZAF98', 'U040RCBPBEC'}",The discussion thread is about implementing logging in the EC2 app_api.py file. rsrinivasan explains that the logging level needs to be initialized before any of the llamaindex imports. U040RCBPBEC thanks rsrinivasan and mentions that akalyta eventually figured it out. No JIRA ticket or other links are mentioned.,3,"['implementing logging', 'logging level initialization']",[],,[]

1692292520.089039,U0336QZAF98,"{'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread is about cache configurations missing in the estimation service. The cache names were changed to prefix the service name for easier identification in Redis. The issue was resolved by adding the cache configurations in the yaml file. The messages about cache miss for pulse instance summaries indicate that the feature flag is enabled but data is not migrated. After data migration, the messages are only seen once on first load. The issue was verified in the stage environment and it looks okay. The relevant commit can be found at <https://gitlab.com/axiamatic-main/vms-estimation-service/-/commit/f96285265e707504730d34231950c0d99fd2fee1>.",4,"['cache configurations added', 'issue resolved', 'data migration completed', 'issue verified in stage environment']","['cache configurations missing', 'cache miss for pulse instance summaries']",POSITIVE,[]
1691425546.761109,U0336QZAF98,"{'U03RSS0S76Y', 'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread is about investigating the issue of multiple calls to the askai/status API while the askai workflow is in a pending state. The team confirms that the cache is working fine and that the method is executed only once. However, the API calls are being executed twice in quick succession. The team will further investigate the issue. No references to JIRA tickets or other links are mentioned.",8,[],"['multiple calls to askai/status API', 'API calls executed twice']",,[]

1691048525.071189,U03RSS0S76Y,"{'U03RSS0S76Y', 'U02GC8SE18V'}",The discussion is about the inclusion of the same pulse instance IDs for both current and previous months in the askai trend workflow. The team is debating whether to exclude the IDs from the previous month to allow for proper trend analysis. The issue is related to timeline analysis and the need for contrasting data. The team is considering the delta value and the closure of pulse instances in determining the inclusion or exclusion of the IDs. The discussion can be found at <https://axiamatic.slack.com/archives/C05D1S3865D/p1690937418437499?thread_ts=1689048143.815829&cid=C05D1S3865D>.,7,"['inclusion of same pulse instance IDs', 'proper trend analysis', 'delta value', 'closure of pulse instances']","['exclusion of IDs from previous month', 'timeline analysis', 'contrasting data']",NEUTRAL,[]

1691039532.337609,U0336QZAF98,"{'U02HCMTQU3W', 'U03RSS0S76Y', 'U0336QZAF98', 'U02GC8SE18V'}","The discussion thread is about an error message related to the ""summary_type"" not being supported yet. The error occurred during trend analysis summary for voice arguments. The team discusses the use of ""voc"" (voice of customer) instead of ""voice"" for APIs. They mention checking with pmangalapuri to fix the issue and confirm that the latest app is deployed with ""voc"" and ""metrics"" from the backend to the estimation service. No references to JIRA tickets or other links are provided.",8,"['voc', 'metrics', 'backend']","['summary_type', 'error message', 'trend analysis']",,[]

1690790736.042319,U0336QZAF98,"{'U02D4DUKDQC', 'U0336QZAF98', 'U040RCBPBEC'}","The discussion thread involves multiple participants discussing the status of a merge request for the Polaris project. One participant mentions manually copying files into their branch and deploying them. Another participant asks for instructions to be updated for a stage deployment, and suggests adding steps for a production deployment in the VPC. The thread ends with a participant mentioning that they have updated the deployment dependencies page. No specific details or references to JIRA tickets are provided.",4,"['Merge request status', 'Manual file copying', 'Deployment instructions', 'Stage deployment', 'Production deployment', 'VPC', 'Deployment dependencies page']",[],,[]

1692894000.139349,U02BV2DGUKC,"{'U0336QZAF98', 'U02BV2DGUKC'}","The discussion is about measuring the health of Polaris. They plan to link the service health check with Polaris, so if Polaris is down, the service will also go down. They will use LB health report to track the number of times Polaris has gone down. They also consider passive monitoring by polling Polaris and reporting the status using allocated resources like Lambda. A sidecar will be started to pull metrics from Polaris. A task will be added to address this.",3,"['linking service health check with Polaris', 'using LB health report to track Polaris downtime', 'passive monitoring by polling Polaris', 'reporting Polaris status using allocated resources', 'starting a sidecar to pull metrics from Polaris', 'adding a task to address the issue']",[],,[]

1678856222.260049,U02HCMTQU3W,"{'U02HCMTQU3W', 'bot_message', 'U03MSMV0W66', 'U02GC8SE18V', 'U0336QZAF98', 'U040RCBPBEC'}","The team is discussing an exception causing a 500 error in the Teams->Teammates tab. The exception is related to a SQL query encountering a ClassCastException. They are also discussing the use case for getting the engagement score by user and considering removing the ""avg participation"" column. They are also discussing a bug in the UI not showing the user engagement score. A JIRA ticket (DAT-902) has been created to track the issue.",18,"['exception causing 500 error', 'use case for getting engagement score by user', 'JIRA ticket created']",['bug in UI not showing user engagement score'],,[]