---
language:
- ru
- kbd
license: mit
base_model: facebook/m2m100_1.2B
tags:
- generated_from_trainer
datasets:
- anzorq/ru-kbd
model-index:
- name: m2m100_1.2B_ft_ru-kbd_50K
results: []
---
# m2m100_1.2B_ft_ru-kbd_50K
This model is a fine-tuned version of [facebook/m2m100_1.2B](https://huggingface.co/facebook/m2m100_1.2B) on the [anzorq/ru-kbd](https://huggingface.co/datasets/anzorq/ru-kbd) dataset.
## Model description
The model translates from Russian into Kabardian (ru→kbd). It was obtained by fine-tuning M2M100 1.2B on roughly 50K Russian–Kabardian sentence pairs from the anzorq/ru-kbd dataset. Since Kabardian is not among M2M100's original languages, the `zu` language code is repurposed as the target-language token (see the inference example below).
## Intended uses & limitations
Intended for translating text from Russian into Kabardian. Limitations have not been documented.
## Evaluation results
```
predict_bleu = 23.3736
predict_gen_len = 16.8114
predict_loss = 0.9729
predict_runtime = 0:03:29.00
predict_samples = 1034
predict_samples_per_second = 4.947
predict_steps_per_second = 0.211
```
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
### Inference
```bash
pip install transformers sentencepiece
```
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_path = "anzorq/m2m100_1.2B_ft_ru-kbd_50K"
# Kabardian (kbd) is not in M2M100's original language list,
# so the "zu" language code is repurposed as the target-language token.
tgt_lang = "zu"

tokenizer = AutoTokenizer.from_pretrained("facebook/m2m100_1.2B")
model = AutoModelForSeq2SeqLM.from_pretrained(model_path)
model.to("cuda")

def translate(text, num_beams=4, num_return_sequences=4):
    inputs = tokenizer(text, return_tensors="pt").to("cuda")
    num_return_sequences = min(num_return_sequences, num_beams)

    translated_tokens = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.lang_code_to_id[tgt_lang],
        num_beams=num_beams,
        num_return_sequences=num_return_sequences,
    )

    # Decode every returned beam into a plain string
    translations = [tokenizer.decode(t, skip_special_tokens=True) for t in translated_tokens]
    return translations
```
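A minimal usage sketch, assuming the code above has been run on a machine with a CUDA GPU; the input sentence is only an illustrative placeholder:

```python
# Returns up to num_return_sequences candidate translations, best-scoring first
candidates = translate("Добрый день!", num_beams=4, num_return_sequences=4)
for i, t in enumerate(candidates, 1):
    print(f"{i}. {t}")
```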