---
license: apache-2.0
language:
- en
- pt
- es
- de
- it
- ru
- fr
- pl
- nl
- vi
- tr
- sv
- id
- ro
- cs
- zh
- hu
- ja
- th
- fi
- fa
- uk
- da
- el
- 'no'
- bg
- sk
- ko
- ar
- lt
- ca
- sl
- he
- et
- lv
- hi
- sq
- af
library_name: transformers
tags:
- text-generation-inference
datasets:
- allenai/MADLAD-400
pipeline_tag: translation
---

T5ForConditionalGeneration files for Google's [MADLAD-400](https://github.com/google-research/google-research/tree/master/madlad_400) 3B-parameter machine translation model.

Article: [MADLAD-400: A Multilingual And Document-Level Large Audited Dataset](https://arxiv.org/abs/2309.04662)

Abstract:

> We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing had in the dataset creation process. We then train and release a 10.7B-parameter multilingual machine translation model trained on 250 billion tokens covering over 450 languages using publicly available data, and find that it is competitive with models that are significantly larger, and report the results on different domains. In addition, we train an 8B-parameter language model, and assess the results on few-shot translation. We make the baseline models available to the research community.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer, GenerationConfig

model = T5ForConditionalGeneration.from_pretrained('jbochi/madlad400-3b-mt')
tokenizer = T5Tokenizer.from_pretrained('jbochi/madlad400-3b-mt')

# The <2xx> prefix token selects the target language (here Spanish).
text = "<2es> how do you say torch in portuguese?"
input_ids = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(
    input_ids=input_ids,
    generation_config=GenerationConfig(decoder_start_token_id=2),
)

tokenizer.decode(outputs[0], skip_special_tokens=True)
# 'como se dice antorcha en portugués?'
```
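The target language is selected entirely by the `<2xx>` prefix token (`<2es>` above), so the same pattern works for any language listed in the metadata at the top of this card. As a convenience, the pattern can be wrapped in a small helper; the `translate` function below is a hypothetical sketch, not part of this card's published API, and assumes the `model` and `tokenizer` from the snippet above are already loaded.

```python
from transformers import GenerationConfig

# Hypothetical helper wrapping the <2xx> target-language prefix convention;
# assumes `model` and `tokenizer` from the snippet above are in scope.
def translate(text: str, target_lang: str = "pt") -> str:
    input_ids = tokenizer(f"<2{target_lang}> {text}", return_tensors="pt").input_ids
    outputs = model.generate(
        input_ids=input_ids,
        generation_config=GenerationConfig(decoder_start_token_id=2),
        max_new_tokens=128,  # assumed cap; raise for longer inputs
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# e.g. translate("how do you say torch in portuguese?", target_lang="pt")
# returns a Portuguese translation of the question.
```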

The Colab notebook used to generate these files is [here](https://colab.research.google.com/drive/1rZ2NRyl2zwmg0sQ2Wi-uZZF48iVYulTC#scrollTo=pVODoE6gA9sw).