Text2Text Generation
Transformers
PyTorch
t5
codet5
text-generation-inference
nielsr (HF staff) committed on
Commit 152ae46
1 Parent(s): 5fe3699

Improve model card

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -21,7 +21,7 @@ From the abstract:
 
 ## Intended uses & limitations
 
-This repository contains the pre-trained model only, so you can use this model for masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as:
+This repository contains the pre-trained model only, so you can use this model for (among other tasks) masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as:
 * code summarization
 * code generation
 * code translation
@@ -34,7 +34,7 @@ See the [model hub](https://huggingface.co/models?search=salesforce/codet) to lo
 
 ### How to use
 
-Here is how to use this model:
+Here is how to use this model for masked span prediction:
 
 ```python
 from transformers import RobertaTokenizer, T5ForConditionalGeneration
@@ -103,7 +103,7 @@ The CodeT5 model was pretrained on CodeSearchNet [Husain et al., 2019](https://a
 
 ### Preprocessing
 
-This model uses a code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository.
+This model uses a code-specific BPE (Byte-Pair Encoding) tokenizer trained using the [HuggingFace Tokenizers](https://github.com/huggingface/tokenizers) library. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository.
 
 ## Evaluation results
 
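
The Python snippet in the second hunk is truncated by the diff view. For reference, a minimal sketch of the masked span prediction usage the new wording points to might look like the following; the `Salesforce/codet5-base` checkpoint name and the example input are assumptions, since the diff does not show which CodeT5 checkpoint this card belongs to:

```python
from transformers import RobertaTokenizer, T5ForConditionalGeneration

# Assumed checkpoint name; the diff only shows that this is a CodeT5 card.
checkpoint = "Salesforce/codet5-base"

# Per the Preprocessing note, the code-specific BPE vocabulary is loaded
# through RobertaTokenizer from the files in this repository.
tokenizer = RobertaTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint)

# Mask a span with the <extra_id_0> sentinel token, T5-style.
text = "def greet(user): print(f'hello <extra_id_0>!')"
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Generate a short completion for the masked span.
generated_ids = model.generate(input_ids, max_length=10)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```

This also illustrates the Preprocessing hunk: preparing text (or code) for the model goes through RobertaTokenizer with the repository's tokenizer files.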
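The first hunk stresses that the main use of the pre-trained checkpoint is fine-tuning on a downstream task such as code summarization. A minimal sketch of a single fine-tuning step under that framing, with an illustrative (code, summary) pair and assumed hyperparameters:

```python
import torch
from transformers import RobertaTokenizer, T5ForConditionalGeneration

checkpoint = "Salesforce/codet5-base"  # assumed checkpoint, as above
tokenizer = RobertaTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint)

# Illustrative training pair for code summarization.
code = "def add(a, b): return a + b"
summary = "Return the sum of two numbers."

input_ids = tokenizer(code, return_tensors="pt").input_ids
labels = tokenizer(summary, return_tensors="pt").input_ids

# T5ForConditionalGeneration computes the cross-entropy loss when labels are passed.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # assumed learning rate
loss = model(input_ids=input_ids, labels=labels).loss
loss.backward()
optimizer.step()
```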