Tags: Text2Text Generation, Transformers, PyTorch, Safetensors, Spanish, bart, text-generation-inference, Inference Endpoints
vgaraujov committed
Commit 4baaf89
1 Parent(s): ea69267

Include some details

Files changed (1)
  1. README.md +8 -1
README.md CHANGED
@@ -6,6 +6,11 @@ datasets:
 - large_spanish_corpus
 - bertin-project/mc4-es-sampled
 - oscar-corpus/OSCAR-2109
+tags:
+- text-generation-inference
+widget:
+- text: Quito es la capital de <mask>
+  example_title: "Text infilling"
 ---
 
 # BARTO (base-sized model)
@@ -18,10 +23,12 @@ BARTO is a BART-based model (transformer encoder-decoder) with a bidirectional (
 
 BARTO is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).
 
-## Intended uses
+## Intended uses & limitations
 
 You can use the raw model for text infilling. However, the model is mainly meant to be fine-tuned on a supervised dataset.
 
+This model does not have a slow tokenizer (BartTokenizer).
+
 ### How to use
 
 Here is how to use this model in PyTorch:
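The diff ends at "Here is how to use this model in PyTorch:" without showing the code itself. Below is a minimal sketch of what such a usage example could look like, assuming a checkpoint id like `vgaraujov/bart-base-spanish` (a placeholder, not confirmed by this commit) and loading the fast tokenizer only, since the card notes the slow BartTokenizer is unavailable:

```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration

# Placeholder checkpoint id; substitute the actual repository name of this model.
checkpoint = "vgaraujov/bart-base-spanish"

# AutoTokenizer resolves to the fast tokenizer; the card states the slow
# BartTokenizer is not available for this model.
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = BartForConditionalGeneration.from_pretrained(checkpoint)

# Text infilling: the model generates a completion for the <mask> span,
# matching the widget example added in this commit.
text = "Quito es la capital de <mask>"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_length=20)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

For text infilling the raw checkpoint fills the masked span at generation time; for downstream tasks such as summarization or classification, the same checkpoint would be fine-tuned on a supervised dataset, as the card suggests.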