| Type | Evaluation Variables | Definition | Scale |
|---|---|---|---|
| Must Have | Factual consistency | Measures whether recommendations are factual or non-factual, e.g.: identical scores as in the input data; identical behavior names as in the input data; correct linking of productive behaviors with frictions. | 0 = Non-Factual: the response contains false information. 1 = Factual: the response is based on accurate facts. |
| Must Have | Sensitivity | Ensures that recommendations are respectful, considerate, and culturally sensitive, avoiding language or suggestions that could be perceived as offensive or inappropriate. | 0 = Insensitive. 1 = Sensitive. |
| Must Have | Completeness | Measures whether the output answers the questions asked, in both content and scope (e.g.: if recommendations are requested for all behaviors, make sure all behaviors are covered; if only the most relevant behaviors are requested, make sure only those are addressed, based on tracker data and past scores: low scores for productive behaviors, highest scores for frictions). The answer should include all of the components requested in the prompt. | 0 = The output fails to address all aspects of the question asked, missing key components or not providing a comprehensive answer based on the requirements stated in the prompt. 1 = The output comprehensively addresses all aspects of the question asked, providing a complete answer that covers the required content and scope. |
| Must Have | Relevance | Assesses the LLM's ability to comprehend and incorporate the user's/team's specific business context and personal details into the conversation/recommendation. | 0 = Doesn't incorporate the user's context. 1 = Incorporates the user's context. |
| Quality | Relevance | Assesses the LLM's ability to comprehend and incorporate the user's/team's specific business context and personal details into the conversation/recommendation. | 1: Rarely recognizes or incorporates specific business or personal context. 2: Sometimes acknowledges context but inconsistently integrates it into responses. 3: Often understands and uses context effectively in conversation. 4: Always incorporates detailed, relevant context into all aspects of the dialogue. |
| Quality | Strategic Value Impact | Evaluates the strategic depth of the advice given, focusing on long-term impact and alignment with business objectives. | 1: Provides no or very limited strategic context. 2: Occasionally offers strategies but lacks a long-term perspective. 3: Regularly provides strategic advice that considers future implications. 4: Always offers deep strategic insights that are actionable and forward-thinking. |
| Quality | Creativity | Assesses the originality and inventiveness of the recommendations. Checks if the submission presents unique solutions, innovative approaches, or new perspectives that deviate from standard practices, thereby stimulating thought and encouraging unconventional problem-solving. | 1 = Lacks originality; solutions are standard or clichéd with no unique elements. 2 = Shows some original elements but largely sticks to conventional ideas. 3 = Balances between innovative and conventional ideas, presenting familiar concepts with novel twists. 4 = Highly creative, offering unique, innovative solutions and fresh perspectives that challenge traditional approaches. |
| Quality | Comprehensiveness | Measures the extent to which recommendations cover a wide variety of dimensions of workplace management, such as communication, time management, team management, process, feedback, governance, strategy, systems and tools, KPIs, roles and responsibilities, meeting discipline, decision making, training, and resource allocation/budgeting. | 1 = Addresses 1-2 dimensions. 2 = Covers three dimensions. 3 = Addresses four dimensions. 4 = Covers more than four dimensions. |
| Quality | Credibility | Assesses the reliability and validity of the proposed actions, ensuring they are based on established best practices, credible sources, or relevant insights or data. | 1 = Not supported by best practices, credible sources, relevant insights, or data. 2 = Some support from best practices, credible sources, relevant insights, or data. 3 = Mostly supported by best practices, credible sources, relevant insights, or data. 4 = Fully supported by best practices or credible sources; cites specific sources and gives links. |
| Quality | Rationale | Explains the rationale behind the recommendations, e.g.: explains why it is important and why it is useful to implement these actions within the specific context of our clients. | 1 = No rationale shared. 2 = Rationale shared for some recommendations. 3 = Rationale shared for most of the recommendations. 4 = Rationale shared for all recommendations. |
| Quality | Measurability | Assesses whether the recommendations include specific, quantifiable metrics or benchmarks that allow for tracking progress and evaluating the effectiveness of the actions over time. | 1 = None of the actions can be concretely measured. 2 = A minority of the actions can be concretely measured. 3 = A majority of the actions can be concretely measured. 4 = All of the actions can be concretely measured. |
| Quality | Feasibility | Evaluates how realistically the recommended actions can be implemented within the client's specific control, e.g. taking into consideration the user's/team's scope of responsibilities, resource constraints, budgeting, etc. | 1 = Outside of the user's scope. 2 = Inside the user's scope but a high level of resources needed. 3 = Inside the user's scope but a medium level of resources needed. 4 = Inside the user's scope and a low level of resources needed. |
| Quality | Action-oriented | Evaluates the specificity and clarity of the actions proposed, focusing on the presence of concrete examples and/or steps that guide implementation. | 1 = No concrete actions nor a step-by-step guide. 2 = Basic examples are shared. 3 = Some examples or some elements of a step-by-step guide are shared for most or all of the recommendations. 4 = Concrete examples of actions and step-by-step guides on how to implement them for all of the recommendations. |
| Quality | Insightfulness | Recommendations provide insights and examples beyond a general statement of the issue at hand, based on the behavioral data; e.g. these insights could be past actions that the team has already thought of. | 1 = Generic; the recommendation only gives a general statement of the issue. 2 = Partially relevant; gives added information on only part of the issues. 3 = Mostly relevant; gives added information on most of the issues. 4 = Highly relevant; gives added information on all of the issues. |
| Packaging (email) | Structure | Evaluates whether the email structure follows the request (prompt) by the client team. | 0 = Does not follow the requested structure. 1 = Follows the requested structure. |
| Packaging (email) | Formatting | Evaluates the use of headings, bullet points, and white space for readability. | 1 = Poorly formatted: the email lacks clear formatting, making it difficult to read or scan. 2 = Adequately formatted: basic formatting is used, but more could be done to enhance readability. 3 = Well formatted: good use of headings, bullet points, and white space makes the email easy to scan and read. 4 = Excellently formatted: the email's formatting is exemplary, significantly enhancing readability and engagement. |
| Packaging (email) | Clarity | Evaluates how clearly and understandably the recommendations are communicated. | 1 = Vague, confusing, or overly complex; includes unnecessary details. 2 = Some parts of the email are understandable, but clarity is inconsistent. 3 = Recommendations are communicated in a straightforward manner, with minimal ambiguity. 4 = The email excels in clarity, with all recommendations presented in an easily understandable way, free from confusion and unnecessary details. |
| Packaging (email) | Tone | Evaluates whether the tone follows the requests of the client team. | 0 = Does not follow the client team's request. 1 = Follows the client team's request. |
| Packaging (email) | Engagement | Evaluates whether the email drives engagement from our clients (e.g.: clicking a link in the email, forwarding the email, opening the email, etc.). | 1 = No significant engagement activity, or some negative engagement (e.g.: the client unsubscribes from the email). 2 = Low engagement activity. 3 = Medium engagement activity. 4 = High engagement activity. |
| Packaging (email) | Thought-provoking | Evaluates how the content and actions are packaged to help trigger clients to think about things differently (e.g.: asks provoking questions, gives an attractive name to the action). | 0 = Not provoking. 1 = Thought-provoking: it triggers different thinking. |
| Packaging (chatbot) | Engagement Techniques | Measures the effectiveness of the bot's techniques to engage the user, such as asking questions, initiating topics, or encouraging discussion. | 1: Rarely uses engagement techniques; mostly passive in conversations. 2: Uses basic engagement techniques but often fails to sustain conversation. 3: Effectively uses engagement techniques to maintain a lively conversation. 4: Masterfully engages with advanced conversational strategies, creating a highly dynamic and engaging dialogue. |
| Packaging (chatbot) | Response Prompting | Evaluates how the bot's responses encourage the user to think deeply or view issues from new perspectives. | 1: Responses do not encourage reflection or new insights. 2: Occasionally prompts thought but lacks consistency. 3: Often prompts the user to reflect or reconsider positions. 4: Consistently stimulates deep thinking and offers new perspectives. |
| Packaging (chatbot) | Feedback Responsiveness | Assesses how well the bot adjusts its responses based on user feedback during the conversation. | 1: Ignores feedback; no adjustment in conversation. 2: Occasionally adjusts responses when feedback is explicit. 3: Regularly modifies approach based on feedback to better suit user needs. 4: Seamlessly integrates feedback, continually refining its responses and approach. |
| Packaging (chatbot) | Tone Adjustment | Measures the bot's ability to adapt its tone to match the user's mood and the sensitivity of the conversation. | 1: Tone is static and unresponsive to the user's emotional state. 2: Somewhat adjusts tone, but often misses the mark or is delayed. 3: Generally matches tone appropriately with the user's mood and conversation context. 4: Perfectly and promptly adjusts tone, enhancing conversation quality and user comfort. |
| Packaging (chatbot) | Emotional Responsiveness | Evaluates the bot's ability to correctly identify and acknowledge the user's emotions based on their expressions or tone. | 1: Fails to recognize or appropriately respond to user emotions. 2: Recognizes basic emotions but may misinterpret nuances. 3: Regularly identifies and understands user emotions accurately. 4: Excellently perceives and responds to a wide range of emotional cues. |
| Packaging (chatbot) | Supportive Communication | Assesses how well the bot communicates in a supportive, affirming manner, reinforcing the user's confidence and comfort. | 1: Often communicates in a way that may feel impersonal or unsupportive. 2: Provides support but not consistently affirming or encouraging. 3: Generally supportive and encouraging, enhancing user confidence. 4: Always communicates supportively, making the user feel valued and understood. |

# ConvoQPack

This repository contains the evaluation criteria, definitions, and scales for Hubble_S's LLM-powered chatbot and recommendation system.

Our evaluation framework consists of two stages:

  1. Quality Evaluation: Assesses must-haves and quality criteria for both the chatbot and recommendation system.
  2. Packaging Evaluation: Separate criteria for the chatbot and for recommendations (emails).
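The two-stage flow above can be sketched in Python. The aggregation policy shown here (gating on the binary must-haves, then averaging the 1-4 quality scores) is an illustrative assumption; the card itself does not specify how individual scores are combined, and the `Criterion` record and `quality_evaluation` helper are hypothetical names, not part of this dataset:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Criterion:
    # Mirrors the card's "Type" and "Evaluation Variables" columns.
    type: str      # "Must Have", "Quality", "Packaging (email)", or "Packaging (chatbot)"
    variable: str

def quality_evaluation(scores: Dict[str, int], criteria: List[Criterion]) -> Optional[float]:
    """Stage 1 sketch: gate on the binary must-haves (0/1), then
    aggregate the 1-4 quality scores.

    Returns the mean quality score, or None if any must-have fails.
    The gate-then-average policy is an assumption for illustration.
    """
    must_haves = [c for c in criteria if c.type == "Must Have"]
    if any(scores[c.variable] != 1 for c in must_haves):
        return None  # a failed must-have invalidates the response outright
    quality = [scores[c.variable] for c in criteria if c.type == "Quality"]
    return sum(quality) / len(quality) if quality else None
```

For stage 2, the same pattern would apply to the "Packaging (email)" and "Packaging (chatbot)" criteria, scored separately per channel.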