---
license: mit
widget:
- text: "What is or could be the cause of target? target: Thanks. Will I be able to take a retest ? context: A: Did I do well on my test ?, B: Do you want to know the honest answer ?, A: Why wouldn't I want to know ?, B: You had pretty bad scores ., A: Exactly what do you mean by bad ?, B: You failed ., A: How'd I fail it ?, B: There are a couple of reasons why you didn't pass ., A: What did I do wrong ?, B: To sum it all up , you really just don't know how to drive ., A: Thanks. Will I be able to take a retest ?, B: Sure you can , in about two and a half weeks . "
  example_title: "Cause 1"
- text: "What is or could be the cause of target? target: But she did and made me disappointed . context: A: David , why didn't you clean the room ?, B: I'm not in the mood ., A: Why are you feeling depressed ?, B: I was told my girlfriend was speaking ill of me. That's a real let-down ., A: I don't think she will do such a thing ., B: But she did and made me disappointed ., A: Oh , cheer up . A girlfriend is not everything ., B: But she means a lot to me ., A: Then forgive her mistake ., B: Oh . I just can't forget it "
  example_title: "Cause 2"
- text: "What subsequent event happens or could happen following the target? target: Oh . I just can't forget it . context: A: David , why didn't you clean the room ?, B: I'm not in the mood ., A: Why are you feeling depressed ?, B: I was told my girlfriend was speaking ill of me. That's a real let-down ., A: I don't think she will do such a thing ., B: But she did and made me disappointed ., A: Oh , cheer up . A girlfriend is not everything ., B: But she means a lot to me ., A: Then forgive her mistake ., B: Oh . I just can't forget it "
  example_title: "Subsequent Event 1"
- text: "What subsequent event happens or could happen following the target? target: Sure you can , in about two and a half weeks . context: A: Did I do well on my test ?, B: Do you want to know the honest answer ?, A: Why wouldn't I want to know ?, B: You had pretty bad scores ., A: Exactly what do you mean by bad ?, B: You failed ., A: How'd I fail it ?, B: There are a couple of reasons why you didn't pass ., A: What did I do wrong ?, B: To sum it all up , you really just don't know how to drive ., A: Thanks. Will I be able to take a retest ?, B: Sure you can , in about two and a half weeks . "
  example_title: "Subsequent Event 2"
- text: "What is the possible emotional reaction of the listener in response to target? target: Oh . I just can't forget it . context: A: David , why didn't you clean the room ?, B: I'm not in the mood ., A: Why are you feeling depressed ?, B: I was told my girlfriend was speaking ill of me. That's a real let-down ., A: I don't think she will do such a thing ., B: But she did and made me disappointed ., A: Oh , cheer up . A girlfriend is not everything ., B: But she means a lot to me ., A: Then forgive her mistake ., B: Oh . I just can't forget it "
  example_title: "Emotional Reaction"
- text: "What is or could be the motivation of target? target: Sure you can , in about two and a half weeks . context: A: Did I do well on my test ?, B: Do you want to know the honest answer ?, A: Why wouldn't I want to know ?, B: You had pretty bad scores ., A: Exactly what do you mean by bad ?, B: You failed ., A: How'd I fail it ?, B: There are a couple of reasons why you didn't pass ., A: What did I do wrong ?, B: To sum it all up , you really just don't know how to drive ., A: Thanks. Will I be able to take a retest ?, B: Sure you can , in about two and a half weeks . "
  example_title: "Motivation"
---
## DIALogue-level Commonsense Transformer (DIALeCT)

This is the pretrained checkpoint for the paper [Multiview Contextual Commonsense Inference: A New Dataset and Task](https://arxiv.org/abs/2210.02890). The model is fine-tuned from the [T5-large](https://huggingface.co/t5-large) checkpoint.

![model image](https://drive.google.com/uc?export=download&id=14RIbxgXhREdu5xZiKn5D-UUzaQLDNLqf)

## Datasets

The dataset used to train the model can be obtained from the [CICERO repo](https://github.com/declare-lab/CICERO) by following the instructions there. Contextualized Commonsense Inference in Dialogues v2 (CICEROv2) consists of annotated commonsense inferences such as cause, subsequent event, motivation, and emotional reaction. The dialogues are drawn from multiple datasets:

| Dataset     | #Dialogues | #Instances |
| ----------- | ---------- | ---------- |
| DailyDialog | 1118       | 3973       |
| MuTual      | 1011       | 3384       |
| DREAM       | 250        | 994        |

### Examples

Some examples of results generated by the pretrained model (in the zero-shot setting).

**Subsequent Event**

```
What is or could be the subsequent event of the target? target: Oh . I just can't forget it . context: A: David , why didn't you clean the room ?, B: I'm not in the mood ., A: Why are you feeling depressed ?, B: I was told my girlfriend was speaking ill of me. That's a real let-down ., A: I don't think she will do such a thing ., B: But she did and made me disappointed ., A: Oh , cheer up . A girlfriend is not everything ., B: But she means a lot to me ., A: Then forgive her mistake ., B: Oh . I just can't forget it
```

Predicted subsequent event:

```
David's girlfriend apologized to david for her mistake.
```

**Cause**

```
What is or could be the cause of target? target: Thanks. Will I be able to take a retest ? context: A: Did I do well on my test ?, B: Do you want to know the honest answer ?, A: Why wouldn't I want to know ?, B: You had pretty bad scores ., A: Exactly what do you mean by bad ?, B: You failed ., A: How'd I fail it ?, B: There are a couple of reasons why you didn't pass ., A: What did I do wrong ?, B: To sum it all up , you really just don't know how to drive ., A: Thanks. Will I be able to take a retest ?, B: Sure you can , in about two and a half weeks .
```

Predicted cause:

```
The speaker has failed the driving test.
```

**Emotional Reaction**

```
What is the possible emotional reaction of the listener in response to target? target: Oh . I just can't forget it . context: A: David , why didn't you clean the room ?, B: I'm not in the mood ., A: Why are you feeling depressed ?, B: I was told my girlfriend was speaking ill of me. That's a real let-down ., A: I don't think she will do such a thing ., B: But she did and made me disappointed ., A: Oh , cheer up . A girlfriend is not everything ., B: But she means a lot to me ., A: Then forgive her mistake ., B: Oh . I just can't forget it
```

Predicted emotional reaction:

```
The listener is hopeful that david will forgive his girlfriend for her mistake.
```

## Inference

The input text should be formatted as follows:

```
Question target: target_utt context: A: utterance 1, B: utterance 2, A: utterance 3, B: utterance 4
```

Here, `Question` is the question for which we want to make the inference, and `A` and `B` are speaker identifiers. The `target_utt` should be one of `utterance 1`, `utterance 2`, `utterance 3`, or `utterance 4`, and should not include the speaker identifier.
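Below is a minimal sketch of how inference might look with the Hugging Face Transformers library. The repository id (`declare-lab/dialect`) and the generation settings are illustrative assumptions, not specified by this card; substitute the actual id of this checkpoint and adjust decoding as needed.

```python
# Minimal inference sketch for DIALeCT with Hugging Face Transformers.
# Assumptions: the model id and the generation settings below are
# illustrative placeholders, not prescribed by the model card.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "declare-lab/dialect"  # hypothetical repo id; use this checkpoint's actual id
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

def build_input(question, target, utterances):
    """Format the input: question, target utterance (no speaker identifier),
    then the dialogue context with speaker identifiers, comma-separated."""
    context = ", ".join(f"{speaker}: {utt}" for speaker, utt in utterances)
    return f"{question} target: {target} context: {context}"

# A truncated version of the "Cause" example above.
utterances = [
    ("A", "Did I do well on my test ?"),
    ("B", "Do you want to know the honest answer ?"),
    ("A", "Why wouldn't I want to know ?"),
    ("B", "You had pretty bad scores ."),
]
text = build_input(
    "What is or could be the cause of target?",
    "You had pretty bad scores .",
    utterances,
)

inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Beam search is used here only as a reasonable default for short, single-sentence inferences; greedy decoding or sampling works with the same input format.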
Some samples are provided as examples in the Hosted Inference API box.

## BibTeX entry and citation info

If you use the model, please cite:

```bibtex
@article{Shen2022MultiviewCC,
  title={Multiview Contextual Commonsense Inference: A New Dataset and Task},
  author={Siqi Shen and Deepanway Ghosal and Navonil Majumder and Henry Lim and Rada Mihalcea and Soujanya Poria},
  journal={ArXiv},
  year={2022},
  volume={abs/2210.02890}
}
```