patrickvonplaten committed on
Commit 89a3ccc
2 Parent(s): 0f587aa f5a84d7

Merge branch 'main' of https://huggingface.co/google/byt5-xl into main

Files changed (1)
  1. README.md +55 -0
README.md ADDED
---
language: multilingual
datasets:
- mc4

license: apache-2.0
---

# ByT5 - xl

ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-xl).

ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual), excluding any supervised training, with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.

ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-xl` significantly outperforms [mt5-xl](https://huggingface.co/google/mt5-xl) on [TweetQA](https://arxiv.org/abs/1907.06292).

Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)

Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*

## Example Inference

ByT5 works on raw UTF-8 bytes and can be used without a tokenizer:

```python
from transformers import T5ForConditionalGeneration
import torch

model = T5ForConditionalGeneration.from_pretrained('google/byt5-xl')

input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + 3  # add 3 for special tokens
labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + 3  # add 3 for special tokens

loss = model(input_ids, labels=labels).loss  # forward pass
```
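
Going the other way, generated byte IDs can be mapped back to text by hand, by undoing the offset of 3 used for the special tokens. The snippet below is a minimal sketch of that decoding step, not part of the original example; the `max_new_tokens` value is an arbitrary choice, and the raw pre-trained checkpoint still has to be fine-tuned before its generations are meaningful:

```python
from transformers import T5ForConditionalGeneration
import torch

model = T5ForConditionalGeneration.from_pretrained('google/byt5-xl')

input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + 3  # add 3 for special tokens

# generate byte-level IDs (the length limit of 64 is arbitrary)
generated_ids = model.generate(input_ids, max_new_tokens=64)

# IDs 0-2 are the pad, EOS and UNK tokens and IDs 3-258 cover the 256 byte values,
# so keep only the byte IDs, undo the shift of 3, and decode the bytes as UTF-8
byte_ids = [i - 3 for i in generated_ids[0].tolist() if 3 <= i < 259]
print(bytes(byte_ids).decode("utf-8", errors="ignore"))
```
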
For batched inference and training, however, it is recommended to use a tokenizer class for padding:

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

model = T5ForConditionalGeneration.from_pretrained('google/byt5-xl')
tokenizer = AutoTokenizer.from_pretrained('google/byt5-xl')

model_inputs = tokenizer(["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt")
labels = tokenizer(["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt").input_ids

loss = model(**model_inputs, labels=labels).loss  # forward pass
```
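
The same tokenizer can also handle padding and decoding for batched generation. The following is a minimal sketch, not part of the original model card: it assumes a checkpoint that has already been fine-tuned (e.g. for translation), and the `max_new_tokens` value is arbitrary:

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

model = T5ForConditionalGeneration.from_pretrained('google/byt5-xl')
tokenizer = AutoTokenizer.from_pretrained('google/byt5-xl')

model_inputs = tokenizer(["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt")

# generate for the whole padded batch; the tokenizer's attention mask is passed along
generated_ids = model.generate(**model_inputs, max_new_tokens=64)

# batch_decode undoes the byte offset and strips padding/special tokens
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```
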
## Abstract

Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.

![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/ByT5.png)