---
license: openrail
---

## What is Cadet-Tiny?

Inspired by Allen AI's **Cosmo-XL**, **Cadet-Tiny** is a _very small_ conversational model trained on the **SODA** dataset. **Cadet-Tiny** is intended for inference at the edge (on something as small as a 2GB RAM Raspberry Pi).

**Cadet-Tiny** is trained from Google's pretrained **t5-small** model and is, as a result, about 2% of the size of the **Cosmo-3B** model.
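
If you want to check that size comparison yourself, here is a small sketch (my own addition, assuming you have `transformers` and `torch` installed) that counts Cadet-Tiny's parameters. **t5-small** has roughly 60M parameters, and 60M out of 3B comes out to about 2%:

```python
# Rough size check (my own addition, not part of the original card).
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("ToddGoldfarb/Cadet-Tiny")
n_params = sum(p.numel() for p in model.parameters())
print(f"Cadet-Tiny has about {n_params / 1e6:.0f}M parameters")  # ~60M vs. Cosmo-3B's ~3B
```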

This is the first SEQ2SEQ NLP model I've ever made! I'm very excited to share it here on HuggingFace! :)

If you have any questions, or any comments on improvements, please contact me at: **tcgoldfarb@gmail.com**

## Google Colab Link

Here is the link to the Google Colab file, where I walk through the process of training the model and using the SODA public dataset from AI2.

https://colab.research.google.com/drive/1cx3Yujr_jGQkseqzXZW-2L0vEyEjds_s?usp=sharing

## Get Started With Cadet-Tiny

Use the code snippet below to get started with Cadet-Tiny!

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import colorful as cf

cf.use_true_colors()
cf.use_style('monokai')


class CadetTinyAgent:
    def __init__(self):
        print(cf.bold | cf.purple("Waking up PIP..."))
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.tokenizer = AutoTokenizer.from_pretrained("ToddGoldfarb/Cadet-Tiny", model_max_length=512)
        self.model = AutoModelForSeq2SeqLM.from_pretrained("ToddGoldfarb/Cadet-Tiny",
                                                           low_cpu_mem_usage=True).to(self.device)
        # The conversation history is kept as one string of " <TURN> "-separated turns.
        self.conversation_history = ""

    def observe(self, observation):
        self.conversation_history = self.conversation_history + observation
        # Rough, character-based truncation safety net: once the history grows
        # past 400 characters, drop the oldest 112 characters.
        if len(self.conversation_history) > 400:
            self.conversation_history = self.conversation_history[112:]

    def set_input(self, situation_narrative="", role_instruction=""):
        input_text = "dialogue: "

        if situation_narrative != "":
            input_text = input_text + situation_narrative

        if role_instruction != "":
            input_text = input_text + " <SEP> " + role_instruction

        input_text = input_text + " <TURN> " + self.conversation_history

        # Uncomment the line below to see what is fed to the model.
        # print(input_text)

        return input_text

    def generate(self, situation_narrative, role_instruction, user_response):
        user_response = user_response + " <TURN> "
        self.observe(user_response)

        input_text = self.set_input(situation_narrative, role_instruction)

        inputs = self.tokenizer([input_text], return_tensors="pt").to(self.device)

        # I encourage you to change the hyperparameters of the model! Start by trying to modify the temperature.
        outputs = self.model.generate(inputs["input_ids"], max_new_tokens=512, temperature=1, top_p=.95,
                                      do_sample=True)
        cadet_response = self.tokenizer.decode(outputs[0], skip_special_tokens=True,
                                               clean_up_tokenization_spaces=False)
        added_turn = cadet_response + " <TURN> "
        self.observe(added_turn)

        return cadet_response

    def reset_history(self):
        self.conversation_history = ""

    def run(self):
        def get_valid_input(prompt, default):
            while True:
                user_input = input(prompt)
                if user_input in ["Y", "N", "y", "n"]:
                    return user_input
                if user_input == "":
                    return default

        while True:
            # MODIFY THESE STRINGS TO YOUR LIKING :)
            situation_narrative = "Imagine you are Cadet-Tiny talking to ???."
            role_instruction = "You are Cadet-Tiny, and you are talking to ???."

            self.chat(situation_narrative, role_instruction)
            continue_chat = get_valid_input(cf.purple("Start a new conversation with new setup? [Y/N]:"), "Y")
            if continue_chat in ["N", "n"]:
                break

        print(cf.blue("CT: See you!"))

    def chat(self, situation_narrative, role_instruction):
        print(cf.green(
            "Cadet-Tiny is running! Input [RESET] to reset the conversation history and [END] to end the conversation."))
        while True:
            user_input = input("You: ")
            if user_input == "[RESET]":
                self.reset_history()
                print(cf.green("[Conversation history cleared. Chat with Cadet-Tiny!]"))
                continue
            if user_input == "[END]":
                break
            response = self.generate(situation_narrative, role_instruction, user_input)
            print(cf.blue("CT: " + response))


def main():
    print(cf.bold | cf.blue("LOADING MODEL"))

    CadetTiny = CadetTinyAgent()
    CadetTiny.run()


if __name__ == '__main__':
    main()
```
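
If you would rather call the agent programmatically than through the interactive loop, a minimal sketch like the one below also works. This snippet is my own illustration (the persona strings are placeholders of mine), but it only uses the methods defined above:

```python
# Minimal one-shot usage sketch (illustration only, not part of the original script).
agent = CadetTinyAgent()

situation_narrative = "Imagine you are Cadet-Tiny talking to a new user."
role_instruction = "You are Cadet-Tiny, and you are talking to a new user."

# Peek at the prompt format the model actually sees:
# "dialogue: <situation> <SEP> <role> <TURN> <history>"
agent.observe("Hi Cadet-Tiny! <TURN> ")
print(agent.set_input(situation_narrative, role_instruction))

# Reset the history, then run a single exchange.
agent.reset_history()
print(agent.generate(situation_narrative, role_instruction, "Hi Cadet-Tiny!"))
```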

## Citations and Special Thanks

Special thanks to Hyunwoo Kim for discussing with me the best way to use the SODA dataset. If you haven't looked into their work with SODA, Prosocial-Dialog, or COSMO, I recommend you do so! As well, read the paper on SODA! The article is listed below.

```bibtex
@article{kim2022soda,
    title={SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization},
    author={Hyunwoo Kim and Jack Hessel and Liwei Jiang and Peter West and Ximing Lu and Youngjae Yu and Pei Zhou and Ronan Le Bras and Malihe Alikhani and Gunhee Kim and Maarten Sap and Yejin Choi},
    journal={ArXiv},
    year={2022},
    volume={abs/2212.10465}
}
```