Update README.md
README.md CHANGED
@@ -16,7 +16,7 @@ Please find more details in our [paper](https://arxiv.org/abs/2309.11674).
 primaryClass={cs.CL}
 }
 ```
-We release
+We release six translation models presented in the paper:
 - **ALMA-7B**: Full-weight fine-tuning of LLaMA-2-7B on 20B monolingual tokens, followed by **full-weight** fine-tuning on human-written parallel data.
 - **ALMA-7B-LoRA**: Full-weight fine-tuning of LLaMA-2-7B on 20B monolingual tokens, followed by **LoRA** fine-tuning on human-written parallel data.
 - **ALMA-7B-R (NEW!)**: Further LoRA fine-tuning on top of ALMA-7B-LoRA with contrastive preference optimization.
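
A minimal usage sketch for the checkpoints listed above, assuming they are published on the Hugging Face Hub under IDs like `haoranxu/ALMA-7B-R` (the hub ID and the exact prompt template are assumptions, not stated in this hunk), using the standard `transformers` causal-LM API:

```python
# Sketch: load an ALMA checkpoint and translate one sentence.
# Assumptions: hub ID "haoranxu/ALMA-7B-R" and an ALMA-style
# plain-text translation prompt; neither is specified in this diff.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "haoranxu/ALMA-7B-R"  # assumed Hugging Face Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 to fit a 7B model on one GPU
    device_map="auto",          # requires the `accelerate` package
)

# Source -> target prompt; the model is expected to continue after "English:".
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64, num_beams=5)

# Strip the prompt tokens and keep only the generated translation.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```

Beam search (`num_beams=5`) mirrors common MT decoding practice; greedy decoding also works if memory is tight.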