# DialogRPT-width

### Dialog Ranking Pretrained Transformers

> How likely is a dialog response to be upvoted 👍 and/or replied to 💬?

This is what [**DialogRPT**](https://github.com/golsun/DialogRPT) is trained to predict.
It is a set of dialog response ranking models proposed by the [Microsoft Research NLP Group](https://www.microsoft.com/en-us/research/group/natural-language-processing/), trained on more than 100 million human feedback data points.
It can be used to improve existing dialog generation models (e.g., [DialoGPT](https://huggingface.co/microsoft/DialoGPT-medium)) by re-ranking the generated response candidates.

Quick Links:
* [EMNLP'20 Paper](https://arxiv.org/abs/2009.06978/)
* [Dataset, training, and evaluation](https://github.com/golsun/DialogRPT)
* [Colab Notebook Demo](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing)

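To illustrate how a ranker like this scores a (context, response) pair, here is a minimal usage sketch with 🤗 Transformers. It assumes the checkpoint loads with `AutoModelForSequenceClassification`, emits a single logit, and joins context and response with the `<|endoftext|>` separator used in the DialogRPT repo; treat it as a sketch rather than an official snippet.

```python
# Minimal sketch: score one (context, response) pair with a DialogRPT checkpoint.
# Assumption: the model emits a single logit, mapped to a probability with sigmoid.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "microsoft/DialogRPT-width"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def score(context: str, response: str) -> float:
    # Join context and candidate response with the <|endoftext|> separator.
    inputs = tokenizer.encode(context + "<|endoftext|>" + response, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs).logits
    return torch.sigmoid(logits).item()

print(score("I love NLP!", "Can anyone recommend a nice review paper?"))
```

A higher score means the candidate is ranked as more likely to draw direct replies, so a generator's outputs can be sorted by this value when re-ranking.
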
We considered the following tasks and provided the corresponding pretrained models.

| Task | Description | Pretrained model |
| :------------- | :----------- | :-----------: |
| **Human feedback** | **given a context and its two human responses, predict...** | |
| `updown` | ... which gets more upvotes? | [model card](https://huggingface.co/microsoft/DialogRPT-updown) |
| `width` | ... which gets more direct replies? | this model |
| `depth` | ... which gets a longer follow-up thread? | [model card](https://huggingface.co/microsoft/DialogRPT-depth) |
| **Human-like** (human vs fake) | **given a context and one human response, distinguish it from...** | |
| `human_vs_rand` | ... a random human response | [model card](https://huggingface.co/microsoft/DialogRPT-human-vs-rand) |
| `human_vs_machine` | ... a machine-generated response | [model card](https://huggingface.co/microsoft/DialogRPT-human-vs-machine) |

### Examples:
The `width` score predicts how likely the response is to receive direct replies.
The examples below can be reproduced with this [Colab Notebook](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing); see also the re-ranking sketch after the table.

| Context | Response | `width` score |
| :------ | :------- | :------------: |
| I love NLP! | Can anyone recommend a nice review paper? | 0.701 |
| I love NLP! | Me too! | 0.029 |

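To connect these scores to the re-ranking use case mentioned above, here is a follow-up sketch that orders a small set of candidate responses by their predicted `width` score. The candidate list is hypothetical (e.g., samples one might draw from DialoGPT), and exact score values may differ from the table depending on library versions.

```python
# Sketch: re-rank hypothetical candidate responses by predicted `width` score.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "microsoft/DialogRPT-width"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def score(context: str, response: str) -> float:
    # Same scoring recipe as the sketch above: <|endoftext|>-joined pair -> sigmoid(logit).
    inputs = tokenizer.encode(context + "<|endoftext|>" + response, return_tensors="pt")
    with torch.no_grad():
        return torch.sigmoid(model(inputs).logits).item()

context = "I love NLP!"
candidates = [  # hypothetical candidates, e.g. sampled from a DialoGPT generator
    "Can anyone recommend a nice review paper?",
    "Me too!",
]

# Sort candidates from most to least likely to receive direct replies.
ranked = sorted(candidates, key=lambda r: score(context, r), reverse=True)
for response in ranked:
    print(f"{score(context, response):.3f}  {response}")
```
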
### Contact:
Please create an issue on [our repo](https://github.com/golsun/DialogRPT).

### Citation:
```
@inproceedings{gao2020dialogrpt,
    title={Dialogue Response Ranking Training with Large-Scale Human Feedback Data},
    author={Xiang Gao and Yizhe Zhang and Michel Galley and Chris Brockett and Bill Dolan},
    year={2020},
    booktitle={EMNLP}
}
```