---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---

### Large-Scale Pre-Training for Goal-Directed Dialog (GODEL)

GODEL is a large-scale pre-trained model for goal-directed dialogs. It is parameterized with a Transformer-based encoder-decoder model and trained for response generation grounded in external text, which allows more effective fine-tuning on dialog tasks that require conditioning the response on information external to the current conversation (e.g., a retrieved document). The pre-trained model can be efficiently fine-tuned and adapted to accomplish a new dialog task with a handful of task-specific dialogs. The v1.1 model is trained on 551M multi-turn dialogs from Reddit discussion threads and 5M instruction- and knowledge-grounded dialogs.

##### Multi-turn generation examples from an interactive environment:
Chitchat example:
> Instruction: given a dialog context, you need to response empathically. <br>
> User: Does money buy happiness? <br>
> Agent: It is a question. Money buys you a lot of things, but not enough to buy happiness. <br>
> User: What is the best way to buy happiness ? <br>
> Agent: Happiness is bought through your experience and not money. <br>

Grounded response generation example:
> Instruction: given a dialog context and related knowledge, you need to response safely based on the knowledge. <br>
> Knowledge: The best Stardew Valley mods PCGamesN_0 / About SMAPI <br>
> User: My favorite game is stardew valley. stardew valley is very fun. <br>
> Agent: I love Stardew Valley mods, like PCGamesN_0 / About SMAPI. <br>

Information about preprocessing, training, and full details of GODEL can be found on the [project webpage](https://aka.ms/GODEL).

ArXiv paper: [https://arxiv.org/abs/2206.11309](https://arxiv.org/abs/2206.11309)

### How to use

Now we are ready to try out how the model works as a chatting partner!

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/GODEL-v1_1-base-seq2seq")
model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/GODEL-v1_1-base-seq2seq")

def generate(instruction, knowledge, dialog):
    # Prefix the optional knowledge snippet with the [KNOWLEDGE] marker
    if knowledge != '':
        knowledge = '[KNOWLEDGE] ' + knowledge
    # Join the dialog turns with the ' EOS ' separator the model was trained on
    dialog = ' EOS '.join(dialog)
    query = f"{instruction} [CONTEXT] {dialog} {knowledge}"
    input_ids = tokenizer(query, return_tensors="pt").input_ids
    outputs = model.generate(input_ids, max_length=128, min_length=8, top_p=0.9, do_sample=True)
    output = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return output

# Instruction for a chitchat task
instruction = 'Instruction: given a dialog context, you need to response empathically.'
# Leave the knowledge empty
knowledge = ''
dialog = [
    'Does money buy happiness?',
    'It is a question. Money buys you a lot of things, but not enough to buy happiness.',
    'What is the best way to buy happiness ?'
]
response = generate(instruction, knowledge, dialog)
print(response)
```
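
The prompt construction can be sanity-checked without downloading the model. This sketch reproduces only the string formatting used before generation, applied to the grounded response generation example from earlier in this card (`build_query` is a hypothetical helper name, not part of the GODEL codebase):

```python
def build_query(instruction, knowledge, dialog):
    # Mirrors the formatting step before generation: an optional
    # [KNOWLEDGE] segment, and dialog turns joined by ' EOS '
    # behind the [CONTEXT] marker.
    if knowledge != '':
        knowledge = '[KNOWLEDGE] ' + knowledge
    context = ' EOS '.join(dialog)
    return f"{instruction} [CONTEXT] {context} {knowledge}"

# Grounded example from above: the knowledge string is appended after the context
instruction = 'Instruction: given a dialog context and related knowledge, you need to response safely based on the knowledge.'
knowledge = 'The best Stardew Valley mods PCGamesN_0 / About SMAPI'
dialog = ['My favorite game is stardew valley. stardew valley is very fun.']

print(build_query(instruction, knowledge, dialog))
```

This makes it easy to verify that multi-turn inputs are flattened into a single string before tokenization, which is why `generate` accepts the dialog as a plain list of turn strings.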

### Citation
If you use this code and data in your research, please cite our arXiv paper:
```
@misc{peng2022godel,
  author = {Peng, Baolin and Galley, Michel and He, Pengcheng and Brockett, Chris and Liden, Lars and Nouri, Elnaz and Yu, Zhou and Dolan, Bill and Gao, Jianfeng},
  title = {GODEL: Large-Scale Pre-training for Goal-Directed Dialog},
  howpublished = {arXiv},
  year = {2022},
  month = {June},
  url = {https://www.microsoft.com/en-us/research/publication/godel-large-scale-pre-training-for-goal-directed-dialog/},
}
```