---
license: cc-by-nc-4.0
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
metrics:
- comet
pipeline_tag: translation
---

# Model Card for TowerBase-13B-v0.1

## Model Details

### Model Description

TowerBase-13B is a language model that results from continuing the pretraining of Llama 2 on a mix of 20 billion tokens of monolingual data in ten different languages (English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian) and bilingual data. TowerBase-13B-v0.1 is the first model in the series.

The resulting model shows improved performance on the supported languages, while maintaining Llama 2's capabilities on English. It is particularly well-suited for fine-tuning on translation and related tasks: check out [TowerInstruct](https://huggingface.co/Unbabel/TowerInstruct-13B-v0.1).

We will release more details in the upcoming technical report.

- **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay
- **Model type:** A 13B parameter model built on top of Llama 2 by continuing pretraining on multilingual data.
- **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian
- **License:** CC-BY-NC-4.0; Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.

## Intended uses & limitations

The model is intended for research purposes in the 10 languages it supports.
The model performs well on translation and related tasks (e.g., automatic post-editing, grammatical error correction) in a few-shot regime.
It can also be fine-tuned to perform these tasks in a zero-shot fashion (see [TowerInstruct](https://huggingface.co/Unbabel/TowerInstruct-13B-v0.1)), as well as on other multilingual tasks. A sketch of few-shot prompting follows below.
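
For illustration, here is a minimal few-shot prompting sketch for translation. The exemplar sentence pairs are invented for this example and are not from the model's training or evaluation data:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Unbabel/TowerBase-13B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Two invented English->Portuguese exemplars, then the sentence to translate.
prompt = (
    "English: The book is on the table.\nPortuguese: O livro está sobre a mesa.\n"
    "English: I like to travel.\nPortuguese: Eu gosto de viajar.\n"
    "English: The weather is nice today.\nPortuguese:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)

# Print only the continuation, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```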

### Out-of-Scope Use

The model is not guaranteed to perform well for languages other than the 10 languages it supports.

## Bias, Risks, and Limitations

TowerBase-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).

## Run the model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Unbabel/TowerBase-13B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt the base model as a text completer: it continues the
# "English: ...\nPortuguese:" pattern with a translation.
text = "English: My name is TowerBase.\nPortuguese:"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
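
Equivalently, and purely as a convenience variant not in the original card, the `transformers` text-generation pipeline wraps the same steps:

```python
from transformers import pipeline

# Convenience variant: the pipeline handles tokenization,
# generation, and decoding in a single call.
pipe = pipeline("text-generation", model="Unbabel/TowerBase-13B-v0.1")
print(pipe("English: My name is TowerBase.\nPortuguese:", max_new_tokens=20)[0]["generated_text"])
```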

### Training Data

The model was trained on filtered versions of [mc4](https://huggingface.co/datasets/mc4) and on bilingual data from various sources (e.g., [OPUS](https://opus.nlpl.eu/)).
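
As an illustration of the kind of monolingual source involved, mc4 can be streamed per language with the `datasets` library. This snippet is our sketch of the raw source, not the project's actual filtering pipeline:

```python
from datasets import load_dataset

# Stream the Portuguese portion of mc4 without downloading the full corpus.
# This shows the raw source only; the training data was filtered before use.
ds = load_dataset("mc4", "pt", split="train", streaming=True)
for example in ds:
    print(example["text"][:200])
    break
```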

## Citation

To be completed.