Text2Text Generation · Transformers · PyTorch · Safetensors · English · t5 · Inference Endpoints · text-generation-inference
machineteacher committed · Commit 97178a3 · 1 Parent(s): e108204

Update README.md

Files changed (1): README.md (+86 -0)
README.md CHANGED
---
license: apache-2.0
datasets:
- asset
- wi_locness
- GEM/wiki_auto_asset_turk
- discofuse
- zaemyung/IteraTeR_plus
language:
- en
metrics:
- sari
- bleu
- accuracy
---
# Model Card for CoEdIT-xxl

This model was obtained by fine-tuning the corresponding google/flan-t5-xxl model on the CoEdIT dataset.

Paper: CoEdIT: Text Editing by Task-Specific Instruction Tuning

Authors: Vipul Raheja, Dhruv Kumar, Ryan Koo, Dongyeop Kang
## Model Details

### Model Description

- **Language(s) (NLP):** English
- **Finetuned from model:** google/flan-t5-xxl

### Model Sources

- **Repository:** https://github.com/vipulraheja/coedit
- **Paper:** CoEdIT: Text Editing by Task-Specific Instruction Tuning
## How to use

We make available the models presented in our paper.

<table>
  <tr>
    <th>Model</th>
    <th>Number of parameters</th>
  </tr>
  <tr>
    <td>CoEdIT-large</td>
    <td>770M</td>
  </tr>
  <tr>
    <td>CoEdIT-xl</td>
    <td>3B</td>
  </tr>
  <tr>
    <td>CoEdIT-xxl</td>
    <td>11B</td>
  </tr>
</table>
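As a rough guide to which checkpoint fits your hardware, the memory taken by the weights alone is simply the parameter count times bytes per parameter. This back-of-the-envelope estimate is ours, not from the paper, and excludes activations and optimizer state:

```python
# Approximate size of the model weights alone, in gigabytes.
def weight_gb(n_params: float, bytes_per_param: int) -> float:
    return n_params * bytes_per_param / 1e9

for name, n in [("coedit-large", 770e6), ("coedit-xl", 3e9), ("coedit-xxl", 11e9)]:
    print(f"{name}: {weight_gb(n, 4):.0f} GB fp32, {weight_gb(n, 2):.0f} GB fp16")
```

So the xxl checkpoint needs on the order of 44 GB in fp32, or about half that in 16-bit precision, before any inference overhead.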
## Uses

### Text Revision Task

Given an edit instruction and an original text, our model can generate the edited version of the text.

![task_specs](https://huggingface.co/grammarly/coedit-xl/resolve/main/Screen%20Shot%202023-05-12%20at%203.36.37%20PM.png)
## Usage

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("grammarly/coedit-xxl")
model = T5ForConditionalGeneration.from_pretrained("grammarly/coedit-xxl")

input_text = "Fix grammatical errors in this sentence: New kinds of vehicles will be invented with new technology than today."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=256)
edited_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
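The same call pattern covers the other edit intents CoEdIT was trained on (grammar correction, coherence, simplification, paraphrasing, formality transfer, neutralization); only the instruction prefix changes. The helper function and the exact instruction wordings below are illustrative assumptions, not the model's required phrasing:

```python
# Illustrative helper (not part of the model's API): a CoEdIT prompt is
# just a natural-language edit instruction followed by the source text.
def make_prompt(instruction: str, text: str) -> str:
    return f"{instruction}: {text}"

prompts = [
    make_prompt("Fix grammatical errors in this sentence",
                "When I grow up, I start to understand what he said is quite right."),
    make_prompt("Make this text coherent",
                "My house is in the mountains. I love the beach."),
    make_prompt("Paraphrase this sentence",
                "Do the robots steal our jobs?"),
]

# Each prompt is then tokenized and passed to model.generate() exactly as
# in the snippet above.
for p in prompts:
    print(p)
```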
#### Software

https://github.com/vipulraheja/coedit
## Citation

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]