---
language:
- zh
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
### HPLT MT release v1.0
This repository contains the translation model for Traditional Chinese-English trained with HPLT data only. The model is available in both Marian and Hugging Face formats.
### Model Info
* Source language: Traditional Chinese
* Target language: English
* Data: HPLT data only
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
* Cleaning: We used [OpusCleaner](https://github.com/hplt-project/OpusCleaner) with a set of basic rules. Details can be found in the filter files [here](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0/data/en-zh_hant/raw/v0).
You can check out our [deliverable report](https://hplt-project.org/HPLT_D5_1___Translation_models_for_select_language_pairs.pdf), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0), and [website](https://hplt-project.org) for more details.
### Usage
**Note** that for better translation quality, we recommend using [HPLT/translate-zh_hant-en-v1.0-hplt_opus](https://huggingface.co/HPLT/translate-zh_hant-en-v1.0-hplt_opus) instead of this model.
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format. We have also converted the model into the Hugging Face format so it is compatible with `transformers`.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.zh_hant-en.spm` from this repository.
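The GitHub instructions linked above are the authoritative reference. As a minimal sketch, assuming a locally built `marian-decoder` binary, decoding with the two files from this repository could be driven from Python roughly as follows; the binary path and the example input sentence are placeholders, and the exact flags should be checked against your Marian installation.
```
# Minimal sketch (assumptions: a local marian-decoder build; the standard Marian
# flags --models/--vocabs/--beam-size; the shared SentencePiece vocabulary is
# passed once for the source side and once for the target side).
import subprocess

MARIAN_DECODER = "/path/to/marian/build/marian-decoder"  # placeholder path

result = subprocess.run(
    [
        MARIAN_DECODER,
        "--models", "model.npz.best-chrf.npz",                      # weights from this repository
        "--vocabs", "model.zh_hant-en.spm", "model.zh_hant-en.spm", # vocabulary from this repository
        "--beam-size", "6",
    ],
    input="在這裡輸入繁體中文句子。\n",  # placeholder source sentence, read from stdin
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)  # translations are written to stdout, one per line
```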
#### Using transformers
We have also converted this model to the Hugging Face format, and you can get started with the script below. **Note** that due to a [known issue](https://github.com/huggingface/transformers/issues/26216) in weight conversion, the checkpoint does not work with `transformers` versions <4.26 or >4.30. We tested with and suggest `pip install transformers==4.28`.
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("HPLT/translate-zh_hant-en-v1.0-hplt")
model = AutoModelForSeq2SeqLM.from_pretrained("HPLT/translate-zh_hant-en-v1.0-hplt")

# Source sentences to translate (Traditional Chinese for this model).
inputs = ["Input goes here.", "Make sure the language is right."]

# Tokenize the batch, padding all sentences to the same length.
batch_tokenized = tokenizer(inputs, return_tensors="pt", padding=True)

# Translate with beam search.
model_output = model.generate(
    **batch_tokenized, num_beams=6, max_new_tokens=512
)

# Convert the generated token IDs back to plain text.
batch_detokenized = tokenizer.batch_decode(
    model_output,
    skip_special_tokens=True,
)
print(batch_detokenized)
```
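If a GPU is available, generation can optionally be moved onto it. This small extension of the script above assumes a CUDA-enabled PyTorch installation and reuses the `tokenizer`, `model`, and `inputs` defined there.
```
import torch

# Pick a CUDA device when available, otherwise stay on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Tokenize and move the batch to the same device before generation.
batch_tokenized = tokenizer(inputs, return_tensors="pt", padding=True).to(device)
model_output = model.generate(**batch_tokenized, num_beams=6, max_new_tokens=512)
print(tokenizer.batch_decode(model_output, skip_special_tokens=True))
```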
## Benchmarks
When decoded with Marian, the model achieves the following test scores.
| Test set | BLEU | chrF++ | COMET22 |
| -------------------------------------- | ---- | ----- | ----- |
| FLORES200 | 20.3 | 47.7 | 0.8182 |
| NTREX | 18.2 | 44.9 | 0.79 |
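For rough comparison, BLEU and chrF++ can be recomputed with [sacrebleu](https://github.com/mjpost/sacrebleu); the sketch below assumes detokenized system outputs and references in plain-text files (the file names are placeholders), and COMET22 additionally requires the `unbabel-comet` package. Small differences from the numbers above are possible depending on the exact evaluation setup.
```
# Minimal sketch (assumption: hypotheses.en and references.en are placeholder
# files with one detokenized sentence per line, aligned line by line).
from sacrebleu.metrics import BLEU, CHRF

with open("hypotheses.en", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]
with open("references.en", encoding="utf-8") as f:
    refs = [line.strip() for line in f]

bleu = BLEU()
chrf = CHRF(word_order=2)  # word_order=2 corresponds to chrF++

print(bleu.corpus_score(hyps, [refs]))
print(chrf.corpus_score(hyps, [refs]))
```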
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546].
Brought to you by researchers from the University of Edinburgh and Charles University in Prague with support from the whole HPLT consortium.