julien-c (HF staff) committed on
Commit
76dc214
1 Parent(s): b4b263a

Migrate model card from transformers-repo


Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/microsoft/DialoGPT-large/README.md

Files changed (1)
  1. README.md +54 -0
README.md ADDED
@@ -0,0 +1,54 @@
---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---

## A State-of-the-Art Large-scale Pretrained Response Generation Model (DialoGPT)

DialoGPT is a state-of-the-art (SOTA) large-scale pretrained dialogue response generation model for multi-turn conversations.
The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that responses generated by DialoGPT are comparable in quality to human responses under a single-turn conversation Turing test.
The model is trained on 147M multi-turn dialogues from Reddit discussion threads.

* Multi-turn generation examples from an interactive environment:

| Role | Response |
|------|----------|
| User | Does money buy happiness? |
| Bot  | Depends how much money you spend on it. |
| User | What is the best way to buy happiness? |
| Bot  | You just have to be a millionaire by your early 20s, then you can be happy. |
| User | This is so difficult! |
| Bot  | You have no idea how hard it is to be a millionaire and happy. There is a reason the rich have a lot of money. |

Information about preprocessing, training, and full details of DialoGPT can be found in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT).

ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)

### How to use

Now we are ready to try out how the model works as a chatting partner!

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch


tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")

# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
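
The loop above uses greedy decoding, which tends to give short and repetitive replies. Below is a minimal sketch, not part of the original card, of the same chat loop with sampling enabled through the `do_sample`, `top_k`, and `top_p` arguments of `model.generate`; the specific values are illustrative assumptions rather than tuned settings.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")

# Chat for 5 lines, sampling a response instead of greedy decoding
for step in range(5):
    # encode the new user input and append the eos_token
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # sample a response; the top_k/top_p values below are illustrative assumptions
    chat_history_ids = model.generate(
        bot_input_ids,
        max_length=1000,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,   # sample from the distribution instead of taking the argmax
        top_k=50,         # restrict sampling to the 50 most likely next tokens
        top_p=0.95,       # nucleus sampling over the top 95% of probability mass
    )

    # print only the tokens generated in this turn
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```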