Tags: Text2Text Generation · Transformers · PyTorch · English · t5 · Inference Endpoints · text-generation-inference
machineteacher committed e3e7026 (1 parent: df45c2c)

Update README.md

Files changed (1): README.md (+6 −3)
README.md CHANGED
````diff
@@ -62,13 +62,16 @@ Given an edit instruction and an original text, our model can generate the edited
 
 ![task_specs](https://huggingface.co/grammarly/coedit-xl/resolve/main/Screen%20Shot%202023-05-12%20at%203.36.37%20PM.png)
 
+This model can also perform edits on composite instructions, as shown below:
+![composite task_specs](https://huggingface.co/grammarly/coedit-xl-composite/resolve/main/composite_examples.png)
+
 ## Usage
 ```python
 from transformers import AutoTokenizer, T5ForConditionalGeneration
 
-tokenizer = AutoTokenizer.from_pretrained("grammarly/coedit-large")
-model = T5ForConditionalGeneration.from_pretrained("grammarly/coedit-large")
-input_text = 'Fix grammatical errors in this sentence: New kinds of vehicles will be invented with new technology than today.'
+tokenizer = AutoTokenizer.from_pretrained("grammarly/coedit-xl-composite")
+model = T5ForConditionalGeneration.from_pretrained("grammarly/coedit-xl-composite")
+input_text = 'Fix grammatical errors in this sentence and make it simpler: New kinds of vehicles will be invented with new technology than today.'
 input_ids = tokenizer(input_text, return_tensors="pt").input_ids
 outputs = model.generate(input_ids, max_length=256)
 edited_text = tokenizer.decode(outputs[0], skip_special_tokens=True)[0]
````
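One caveat about the snippet, present on both sides of the diff: `tokenizer.decode(outputs[0], skip_special_tokens=True)` already returns a plain `str`, so the trailing `[0]` keeps only its first character rather than the edited sentence. A minimal sketch of the issue, using a hard-coded stand-in string in place of the decoded model output (no model download needed):

```python
# Stand-in for the string returned by
# tokenizer.decode(outputs[0], skip_special_tokens=True)
decoded = "New kinds of vehicles will be invented with new technology."

buggy_edited_text = decoded[0]  # indexing a str yields one character: "N"
fixed_edited_text = decoded     # dropping the trailing [0] keeps the sentence

print(repr(buggy_edited_text))  # 'N'
print(repr(fixed_edited_text))
```

In other words, the last line of the README snippet would likely read `edited_text = tokenizer.decode(outputs[0], skip_special_tokens=True)` without the final `[0]`.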