---
tags:
- conversational
license: mit
---

## Finetuned DialoGPT model on Spanish Conversations

This model was finetuned from the original [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) model on subtitles from Spanish movies and telenovelas from the awesome [OpenSubtitles dataset](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/opensubtitles).

DialoGPT paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)

Some example dialog from the model:

| Role | Response |
|------|----------|
| User | ¿Cómo está el clima hoy? |
| Bot  | Muy bien. |
| User | Eso es bueno escuchar. ¿Vas a ir a la playa? |
| Bot  | No, también puedo regresar. |
| User | Oh bien, bueno, iré a la playa más tarde. |
| Bot  | ¡No! |
| User | ¿Qué está mal con eso? |
| Bot  | ¡Cómo que no! |
| User | Estoy confundido, ¿por qué no puedo ir a la playa? |
| Bot  | ¡Cómo que no! |
| User | Explícamelo por favor. |
| Bot  | ¡No! |

## Using the model

Example code for trying out the model (adapted from the [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) model card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("ncoop57/DiGPTame-medium")
model = AutoModelForCausalLM.from_pretrained("ncoop57/DiGPTame-medium")

# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token, and return a PyTorch tensor
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
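
Greedy decoding (as above) tends to produce short, repetitive answers like the ones in the sample dialog. `model.generate` also accepts the usual sampling parameters; the single-turn sketch below is only illustrative, and the `top_k`/`top_p`/`temperature` values are assumptions rather than settings tuned for this model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ncoop57/DiGPTame-medium")
model = AutoModelForCausalLM.from_pretrained("ncoop57/DiGPTame-medium")

# Encode a single Spanish prompt followed by the end-of-sequence token.
prompt_ids = tokenizer.encode("¿Cómo está el clima hoy?" + tokenizer.eos_token, return_tensors="pt")

# Sample a reply instead of decoding greedily; the values here are placeholders.
reply_ids = model.generate(
    prompt_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,   # sample from the predicted distribution
    top_k=50,         # consider only the 50 most likely next tokens
    top_p=0.95,       # nucleus sampling
    temperature=0.8,  # slightly flatter distribution for more varied replies
)

# Decode only the newly generated tokens (everything after the prompt).
print(tokenizer.decode(reply_ids[0, prompt_ids.shape[-1]:], skip_special_tokens=True))
```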

## Training your own model

If you would like to finetune your own model or further finetune this Spanish model, please check out my blog post on that exact topic:
[https://nathancooper.io/i-am-a-nerd/chatbot/deep-learning/gpt2/2020/05/12/chatbot-part-1.html](https://nathancooper.io/i-am-a-nerd/chatbot/deep-learning/gpt2/2020/05/12/chatbot-part-1.html)
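
For reference, a finetuning run along those lines can be wired up with the `transformers` Trainer API. The sketch below is an outline under stated assumptions rather than the recipe from the blog post: the tiny in-memory `dialogs` list stands in for dialog turns extracted from OpenSubtitles, and the output directory and hyperparameters are placeholders.

```python
# A minimal finetuning sketch using the Hugging Face Trainer API.
# The in-memory dataset and the hyperparameters below are placeholders only.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# Each training example is a short dialog, with turns joined by the EOS token.
dialogs = [
    "¿Cómo está el clima hoy?" + tokenizer.eos_token + "Muy bien." + tokenizer.eos_token,
    "¿Vas a ir a la playa?" + tokenizer.eos_token + "No, también puedo regresar." + tokenizer.eos_token,
]
dataset = Dataset.from_dict({"text": dialogs})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Causal LM objective: the labels are the inputs themselves (mlm=False).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="digptame-finetune",  # hypothetical output directory
    num_train_epochs=3,
    per_device_train_batch_size=2,
    learning_rate=5e-5,
    logging_steps=10,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()
trainer.save_model("digptame-finetune")
```

In a real run you would replace the toy `dialogs` list with the full subtitle corpus and tune the batch size, epochs, and learning rate; the blog post linked above walks through that preparation in detail.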