---
tags:
- text2text-generation
- t5
---

# Model Card for t5_sentence_paraphraser

# Model Details

## Model Description

Using this model you can generate paraphrases of any given question.

- **Developed by:** Ramsri Goutham Golla
- **Shared by [Optional]:** Ramsri Goutham Golla
- **Model type:** Text2Text Generation
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Parent Model:** [All T5 Checkpoints](https://huggingface.co/models?search=t5)
- **Resources for more information:**
  - [GitHub Repo](https://github.com/ramsrigouthamg/Paraphrase-any-question-with-T5-Text-To-Text-Transfer-Transformer-)
  - [Blog Post](https://towardsdatascience.com/paraphrase-any-question-with-t5-text-to-text-transfer-transformer-pretrained-model-and-cbb9e35f1555)

# Uses

## Direct Use

This model can be used for the task of Text2Text Generation.
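
As a hedged sketch (not part of the original card), the model can also be called through the generic `text2text-generation` pipeline in 🤗 Transformers. The `paraphrase: ... </s>` input format follows the developer's blog post and is an assumption about this checkpoint, as are the sampling settings:

```python
from transformers import pipeline

# Load the paraphraser through the generic text2text-generation pipeline.
paraphraser = pipeline(
    "text2text-generation",
    model="ramsrigouthamg/t5_sentence_paraphraser",
)

# "paraphrase: ... </s>" mirrors the prompt format from the blog post (assumption).
question = "What is the best way to learn to play guitar?"
candidates = paraphraser(
    f"paraphrase: {question} </s>",
    max_length=64,
    do_sample=True,
    top_k=120,
    top_p=0.95,
    num_return_sequences=3,
)
for candidate in candidates:
    print(candidate["generated_text"])
```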

## Downstream Use [Optional]

More information needed.

## Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people.

# Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

# Training Details

## Training Data

The developer writes in a [blog post](https://towardsdatascience.com/paraphrase-any-question-with-t5-text-to-text-transfer-transformer-pretrained-model-and-cbb9e35f1555) that they used the:

> [Quora Question Pairs](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset to collect all the questions marked as **duplicates** and prepared training and validation sets. Questions that are duplicates serve our purpose of getting **paraphrase** pairs.
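
As an illustrative sketch of that data preparation (not the developer's actual script), duplicate pairs can be pulled from the Quora Question Pairs TSV; the local file path and the 90/10 split below are assumptions:

```python
import pandas as pd

# Quora Question Pairs: rows with is_duplicate == 1 give paraphrase pairs.
# The file path is an assumption; download the TSV from the Quora release.
df = pd.read_csv("quora_duplicate_questions.tsv", sep="\t")
pairs = df[df["is_duplicate"] == 1][["question1", "question2"]].dropna()

# Hypothetical 90/10 split into training and validation sets.
train = pairs.sample(frac=0.9, random_state=42)
valid = pairs.drop(train.index)
print(f"{len(train)} training pairs, {len(valid)} validation pairs")
```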

## Training Procedure

The developer also writes in the [blog post](https://towardsdatascience.com/paraphrase-any-question-with-t5-text-to-text-transfer-transformer-pretrained-model-and-cbb9e35f1555):

> I trained T5 with the **original sentence** as **input** and **paraphrased** (duplicate sentence from Quora Question pairs) sentence as **output**.
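
A minimal sketch of how such input/output pairs could be encoded for T5 fine-tuning; the `paraphrase:` task prefix, maximum lengths, and base checkpoint are assumptions drawn from the blog post rather than a published training script:

```python
from transformers import T5Tokenizer

# Base checkpoint is an assumption; the blog post fine-tunes a standard T5 model.
tokenizer = T5Tokenizer.from_pretrained("t5-base")

def encode_pair(original, paraphrase, max_len=256):
    # Text-to-text framing: original question (with task prefix) -> duplicate question.
    source = tokenizer(
        f"paraphrase: {original} </s>",
        max_length=max_len, padding="max_length", truncation=True, return_tensors="pt",
    )
    target = tokenizer(
        f"{paraphrase} </s>",
        max_length=max_len, padding="max_length", truncation=True, return_tensors="pt",
    )
    return source.input_ids, source.attention_mask, target.input_ids
```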

### Preprocessing

More information needed

### Speeds, Sizes, Times

More information needed

# Evaluation

## Testing Data, Factors & Metrics

### Testing Data

More information needed

### Factors

More information needed

### Metrics

More information needed

## Results

More information needed

# Model Examination

More information needed

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** p2.xlarge
- **Hours used:** ~20 hrs
- **Cloud Provider:** AWS EC2
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Technical Specifications [optional]

## Model Architecture and Objective

More information needed

## Compute Infrastructure

More information needed

### Hardware

More information needed

### Software

More information needed.

# Citation

**BibTeX:**

More information needed

**APA:**

More information needed

# Glossary [optional]

More information needed

# More Information [optional]

More information needed

# Model Card Authors [optional]

Ramsri Goutham Golla, in collaboration with Ezi Ozoani and the Hugging Face team.

# Model Card Contact

More information needed

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ramsrigouthamg/t5_sentence_paraphraser")
model = AutoModelForSeq2SeqLM.from_pretrained("ramsrigouthamg/t5_sentence_paraphraser")
```
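
Building on the load code above, here is a hedged generation sketch; the `paraphrase: ... </s>` prompt and the sampling settings follow the blog post and Colab notebook, so treat them as assumptions rather than documented defaults of this checkpoint:

```python
sentence = "Which course should I take to get started with data science?"

# Prompt format from the blog post (assumption for this checkpoint).
encoding = tokenizer(
    f"paraphrase: {sentence} </s>",
    return_tensors="pt",
    max_length=256,
    padding="max_length",
    truncation=True,
)

# Top-k / top-p sampling returns several diverse paraphrase candidates.
outputs = model.generate(
    input_ids=encoding.input_ids,
    attention_mask=encoding.attention_mask,
    max_length=128,
    do_sample=True,
    top_k=120,
    top_p=0.98,
    num_return_sequences=3,
)

for output in outputs:
    print(tokenizer.decode(output, skip_special_tokens=True))
```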

See the [blog post](https://towardsdatascience.com/paraphrase-any-question-with-t5-text-to-text-transfer-transformer-pretrained-model-and-cbb9e35f1555) and this [Colab Notebook](https://colab.research.google.com/drive/176NSaYjc2eeI-78oLH_F9-YV3po3qQQO?usp=sharing#scrollTo=SDVQ04fGRb1v) for more examples.

</details>