# Tag- and Evaluation-based Response Selector

## Description

Response Selector is a component that selects the final response among the hypotheses provided by different skills.

The Tag- and Evaluation-based Response Selector uses a combined approach that
prioritizes scripted skills while retaining the ability to take system initiative via so-called linking questions, which steer the conversation toward the scripts.
The final response can therefore be a combination of a hypothesis and a linking question.

The approach is best suited for distributions in which most responses are expected to come from scripted skills.
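As a minimal sketch of how a linking question can be combined with a selected hypothesis (the function name and the `PROMPT_PROBA` value here are illustrative, not the actual implementation):

```python
import random

PROMPT_PROBA = 0.3  # illustrative probability of appending a linking question


def maybe_add_linking_question(hypothesis: str, linking_questions: list) -> str:
    """Randomly append a linking question that steers the user toward a scripted skill."""
    if linking_questions and random.random() < PROMPT_PROBA:
        return f"{hypothesis} {random.choice(linking_questions)}"
    return hypothesis
```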

### Parameters

The algorithm exposes a large number of parameters that control its filtration and prioritization rules.
Toxic hypotheses are always filtered out.

```
TAG_BASED_SELECTION: whether to use tag-based prioritization or simply utilize an empirical formula
CALL_BY_NAME_PROBABILITY: probability to add user's name if known
PROMPT_PROBA: probability to add linking question to a selected hypothesis
ACKNOWLEDGEMENT_PROBA: probability to add acknowledgement to a selected hypothesis
PRIORITIZE_WITH_REQUIRED_ACT: whether to prioritize hypotheses with a required dialog act (e.g., statement in response to user's question)
PRIORITIZE_NO_DIALOG_BREAKDOWN: whether to prioritize hypotheses classified as no-dialog-breakdown
PRIORITIZE_WITH_SAME_TOPIC_ENTITY: whether to prioritize hypotheses containing entities from the user's last utterance
IGNORE_DISLIKED_SKILLS: whether to ignore hypotheses by disliked skills (if user answers negatively to linking question to a skill, we add this skill to disliked ones)
GREETING_FIRST: whether to add greeting to the first bot's utterance
RESTRICTION_FOR_SENSITIVE_CASE: whether to avoid generative skills in sensitive cases
PRIORITIZE_PROMTS_WHEN_NO_SCRIPTS: whether to prioritize hypotheses tagged with the `prompt` tag when there are no responses from scripted skills
MAX_TURNS_WITHOUT_SCRIPTS: maximum number of turns in a dialog without responses by scripted skills
ADD_ACKNOWLEDGMENTS_IF_POSSIBLE: whether to add acknowledgement to a selected hypothesis
PRIORITIZE_SCRIPTED_SKILLS: whether to prioritize scripted skills
CONFIDENCE_STRENGTH: confidence coefficient in a formula to compute a final score
CONV_EVAL_STRENGTH: annotator evaluation coefficient in a formula to compute a final score
PRIORITIZE_HUMAN_INITIATIVE: whether to prioritize human initiative (downscore question hypotheses when the user has asked a question)
QUESTION_TO_QUESTION_DOWNSCORE_COEF: coefficient to multiply scores of questions when the user asked a question
LANGUAGE: language to consider
FALLBACK_FILE: a file name with fallbacks from `dream/common/fallbacks/`
```
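When tag-based selection is disabled, the empirical formula weights the skill's confidence against the conversation evaluator's annotations via `CONFIDENCE_STRENGTH` and `CONV_EVAL_STRENGTH`. A minimal sketch of such a formula (the exact aggregation in the codebase may differ; the weights and the treatment of the erroneous-response score here are illustrative):

```python
CONFIDENCE_STRENGTH = 0.8  # illustrative weight for the skill's own confidence
CONV_EVAL_STRENGTH = 0.4   # illustrative weight for the annotator evaluation


def final_score(confidence: float, conv_eval: dict) -> float:
    """Combine skill confidence with convers_evaluator_annotator scores.

    Positive evaluator dimensions are averaged; the probability that the
    response is erroneous counts against the hypothesis.
    """
    positive = (
        conv_eval["isResponseComprehensible"]
        + conv_eval["isResponseInteresting"]
        + conv_eval["isResponseOnTopic"]
        + conv_eval["responseEngagesUser"]
    ) / 4
    penalty = conv_eval["isResponseErroneous"]
    return CONFIDENCE_STRENGTH * confidence + CONV_EVAL_STRENGTH * (positive - penalty)
```

With the evaluator scores from the output example below, a hypothesis with confidence 0.98 would score roughly 0.74 under these illustrative weights.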

## Input/Output
**Input:** a list of hypotheses generated by the corresponding skills, together with their scores and metadata

**Output:** the final hypothesis chosen as a reply

A partial example of such a response selector's output:
```json
{
  "skill_name": "program_y",
  "annotations": {
    "toxic_classification": {
      "identity_hate": 8.749961853027344e-05,
      "insult": 0.00024232268333435059,
      "obscene": 2.828240394592285e-05,
      "severe_toxic": 1.8358230590820312e-05,
      "sexual_explicit": 2.9712915420532227e-05,
      "threat": 6.490945816040039e-05,
      "toxic": 0.00043845176696777344
    },
    "stop_detect": {
      "stop": 0.5808720588684082,
      "continue": 0.45234695076942444
    },
    "convers_evaluator_annotator": {
      "isResponseComprehensible": 0.984,
      "isResponseErroneous": 0.614,
      "isResponseInteresting": 0.253,
      "isResponseOnTopic": 0.226,
      "responseEngagesUser": 0.56
    },
    "badlisted_words": {
      "inappropriate": false,
      "profanity": false,
      "restricted_topics": false
    }
  },
  "text": "Good Morning, this is an Alexa Prize Socialbot! How are you?",
  "confidence": 0.98
}
```
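The filtration step over such annotated hypotheses can be sketched as follows (the function name, threshold value, and fallback behavior are illustrative assumptions, not the actual implementation):

```python
TOXICITY_THRESHOLD = 0.5  # illustrative cutoff for toxic_classification scores


def select_response(hypotheses: list) -> dict:
    """Drop toxic or badlisted hypotheses, then pick the highest-confidence one."""
    safe = []
    for hyp in hypotheses:
        toxic = hyp["annotations"].get("toxic_classification", {})
        badlisted = hyp["annotations"].get("badlisted_words", {})
        if any(score > TOXICITY_THRESHOLD for score in toxic.values()):
            continue  # filter out hypotheses flagged by the toxicity classifier
        if any(badlisted.values()):
            continue  # filter out hypotheses containing badlisted words
        safe.append(hyp)
    if not safe:
        # in the real component, fallback responses come from FALLBACK_FILE
        raise ValueError("no acceptable hypotheses left after filtration")
    return max(safe, key=lambda hyp: hyp["confidence"])
```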


### How to run conversation evaluator locally

`docker-compose -f docker-compose.yml -f dev.yml -f cpu.yml -f one_worker.yml up toxic_classification badlisted_words convers_evaluation_selector`

Then use the `--url` option of the testing script to point it at the running service.

## Dependencies
none

