jshin49 committed
Commit 98c6fe6
1 Parent(s): 31c2f3e

Update README.md

Files changed (1)
  1. README.md +24 -2
README.md CHANGED
@@ -1,6 +1,28 @@
- Pre-trained t5-large on SAMSuM Dialogue Summarization corpus.

- Used the following prompt
+ We pre-trained `t5-large` on the SAMSum Dialogue Summarization corpus.

+ If you use this work for your research, please cite our work [Dialogue Summaries as Dialogue States (DS2), Template-Guided Summarization for Few-shot Dialogue State Tracking](https://arxiv.org/abs/2203.01552).
+
+ ### Citation
+ ```
+ @inproceedings{shin-etal-2022-dialogue,
+     title = "Dialogue Summaries as Dialogue States ({DS}2), Template-Guided Summarization for Few-shot Dialogue State Tracking",
+     author = "Shin, Jamin and
+         Yu, Hangyeol and
+         Moon, Hyeongdon and
+         Madotto, Andrea and
+         Park, Juneyoung",
+     booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
+     month = may,
+     year = "2022",
+     address = "Dublin, Ireland",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2022.findings-acl.302",
+     pages = "3824--3846",
+     abstract = "Annotating task-oriented dialogues is notorious for the expensive and difficult data collection process. Few-shot dialogue state tracking (DST) is a realistic solution to this problem. In this paper, we hypothesize that dialogue summaries are essentially unstructured dialogue states; hence, we propose to reformulate dialogue state tracking as a dialogue summarization problem. To elaborate, we train a text-to-text language model with synthetic template-based dialogue summaries, generated by a set of rules from the dialogue states. Then, the dialogue states can be recovered by inversely applying the summary generation rules. We empirically show that our method DS2 outperforms previous works on few-shot DST in MultiWoZ 2.0 and 2.1, in both cross-domain and multi-domain settings. Our method also exhibits vast speedup during both training and inference as it can generate all states at once. Finally, based on our analysis, we discover that the naturalness of the summary templates plays a key role for successful training.",
+ }
+ ```
+
+ We used the following prompt for training:

  ```
  Summarize this dialogue:
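
Below is a minimal inference sketch showing how the training prompt above might be applied at test time, assuming the Hugging Face `transformers` API. The model id `your-org/t5-large-samsum` is a placeholder for this repository's checkpoint, and the exact prompt formatting (a newline after the colon) is an assumption rather than something the commit specifies.

```python
# Minimal sketch, not from the commit: run the SAMSum-tuned t5-large with the
# README's training prompt. "your-org/t5-large-samsum" is a placeholder id;
# the newline after the prompt colon is an assumed formatting choice.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "your-org/t5-large-samsum"  # placeholder: substitute this repo's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)

# Prepend the prompt the model was trained with, then generate a summary.
inputs = tokenizer("Summarize this dialogue:\n" + dialogue, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

The beam width and token budget here are arbitrary decoding choices for illustration, not values taken from the paper.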