---
tags:
- text2text-generation
datasets:
- CoT-Collection
- Flan-Collection
license: apache-2.0
language:
- en
pipeline_tag: text2text-generation
---

# TL;DR

CoT-T5 is a language model that uses [Flan-T5](https://huggingface.co/google/flan-t5-xxl) as its base model and is CoT fine-tuned on 1.84 million rationales across 1,060 tasks from the [CoT Collection](https://huggingface.co/datasets/kaist-ai/CoT-Collection).
Because it was CoT fine-tuned on such a large number of rationales, it shows superior performance with CoT compared to Flan-T5.
You can use CoT-T5 to (1) solve unseen tasks in a zero-shot setting, and (2) adapt to new tasks with CoT fine-tuning.

# Model Details

## Model Description

- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All CoT-T5 Checkpoints](https://huggingface.co/models?search=cot-t5)
- **Resources for more information:**
  - [Research paper](https://arxiv.org/abs/2305.14045)
  - [GitHub Repo](https://github.com/kaistAI/CoT-Collection)

CoT-T5 is trained in two sizes (3B and 11B).
You can find the 3B model on [this page](https://huggingface.co/kaist-ai/CoT-T5-3B).
Also, check out our dataset on [this page](https://huggingface.co/datasets/kaist-ai/CoT-Collection).

## License
The CoT Collection and CoT-T5 are subject to OpenAI's Terms of Use for the generated data. If you suspect any violations, please reach out to us.

# Usage

Find below some example scripts on how to use the model with `transformers`:

## Using the PyTorch model

### Running the model on a CPU

<details>
<summary> Click to expand </summary>

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("kaist-ai/CoT-T5-11B")
model = T5ForConditionalGeneration.from_pretrained("kaist-ai/CoT-T5-11B")

input_text = "Read the Directions and try to pick among A,B,C,D.\n\nDirections: A good way to figure out the relationship in a given question is to make up a sentence that describes the relationship between the first two words. Then, try to use the same sentence to find out which of the answer choices completes the same relationship with the third word.\nQuestion: Odometer is to mileage as compass is to?\nOptions: (A) speed, (B) hiking, (C) needle, (D) direction.\nLet's think step by step.\n"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

# max_new_tokens leaves room for the full rationale; the default limit would truncate it
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

</details>
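
The long `input_text` above follows a fixed multiple-choice template that ends in "Let's think step by step.". As a minimal sketch, prompts in that format can be assembled with a small helper; `build_cot_prompt` is a hypothetical name of ours, not part of the CoT-T5 release.

```python
# Hypothetical helper (our own sketch, not part of the CoT-T5 release):
# assemble a multiple-choice CoT prompt in the format used above.
def build_cot_prompt(directions, question, options):
    letters = "ABCDEFGH"[: len(options)]
    opts = ", ".join(f"({l}) {o}" for l, o in zip(letters, options))
    return (
        f"Read the Directions and try to pick among {','.join(letters)}.\n\n"
        f"Directions: {directions}\n"
        f"Question: {question}\n"
        f"Options: {opts}.\n"
        "Let's think step by step.\n"
    )

prompt = build_cot_prompt(
    "Make up a sentence that describes the relationship between the first two "
    "words, then find the answer choice that completes the same relationship "
    "with the third word.",
    "Odometer is to mileage as compass is to?",
    ["speed", "hiking", "needle", "direction"],
)
```

The resulting string can be tokenized and passed to `model.generate` exactly as in the examples on this page.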

### Running the model on a GPU

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("kaist-ai/CoT-T5-11B")
model = T5ForConditionalGeneration.from_pretrained("kaist-ai/CoT-T5-11B", device_map="auto")

input_text = "Read the Directions and try to pick among A,B,C,D.\n\nDirections: A good way to figure out the relationship in a given question is to make up a sentence that describes the relationship between the first two words. Then, try to use the same sentence to find out which of the answer choices completes the same relationship with the third word.\nQuestion: Odometer is to mileage as compass is to?\nOptions: (A) speed, (B) hiking, (C) needle, (D) direction.\nLet's think step by step.\n"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

</details>

86
+
87
+ ### Running the model on a GPU using different precisions
88
+
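Which precision to pick is mostly a question of weight memory. A back-of-the-envelope estimate for the 11B model (weights only; activations and the decoder cache add to this, so treat it as a lower bound):

```python
# Rough weight-only memory footprint of an 11B-parameter model per dtype.
params = 11_000_000_000
gib = 1024 ** 3
for name, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    print(f"{name}: {params * bytes_per_param / gib:.1f} GiB")
# fp32: 41.0 GiB, fp16: 20.5 GiB, int8: 10.2 GiB
```

So fp16 fits on a single 24 GB+ accelerator with headroom, while int8 roughly halves that again.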
#### FP16

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("kaist-ai/CoT-T5-11B")
model = T5ForConditionalGeneration.from_pretrained("kaist-ai/CoT-T5-11B", device_map="auto", torch_dtype=torch.float16)

input_text = "Read the Directions and try to pick among A,B,C,D.\n\nDirections: A good way to figure out the relationship in a given question is to make up a sentence that describes the relationship between the first two words. Then, try to use the same sentence to find out which of the answer choices completes the same relationship with the third word.\nQuestion: Odometer is to mileage as compass is to?\nOptions: (A) speed, (B) hiking, (C) needle, (D) direction.\nLet's think step by step.\n"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

</details>

#### INT8

<details>
<summary> Click to expand </summary>

```python
# pip install bitsandbytes accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("kaist-ai/CoT-T5-11B")
model = T5ForConditionalGeneration.from_pretrained("kaist-ai/CoT-T5-11B", device_map="auto", load_in_8bit=True)

input_text = "Read the Directions and try to pick among A,B,C,D.\n\nDirections: A good way to figure out the relationship in a given question is to make up a sentence that describes the relationship between the first two words. Then, try to use the same sentence to find out which of the answer choices completes the same relationship with the third word.\nQuestion: Odometer is to mileage as compass is to?\nOptions: (A) speed, (B) hiking, (C) needle, (D) direction.\nLet's think step by step.\n"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

</details>
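
Because CoT-T5 emits a rationale rather than a bare answer label, downstream evaluation needs a small post-processing step. A minimal sketch, assuming the chosen option letter appears in parentheses somewhere in the generated text; `extract_choice` is a hypothetical helper of ours, not part of the CoT-T5 release.

```python
import re

# Hypothetical post-processing sketch (not part of the CoT-T5 release):
# take the last "(A)"-style marker in the rationale as the chosen option.
def extract_choice(rationale):
    matches = re.findall(r"\(([A-D])\)", rationale)
    return matches[-1] if matches else None

answer = extract_choice("A compass measures direction, so the answer is (D) direction.")
```

Taking the *last* marker matters because the rationale often restates the option list before naming its final pick.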

# Citation

If you find this model helpful, please consider citing our paper.

**BibTeX:**

```bibtex
@article{kim2023cot,
  title={The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning},
  author={Kim, Seungone and Joo, Se June and Kim, Doyoung and Jang, Joel and Ye, Seonghyeon and Shin, Jamin and Seo, Minjoon},
  journal={arXiv preprint arXiv:2305.14045},
  year={2023}
}
```