---
license: apache-2.0
language:
- fr
- wo
metrics:
- bleu
pipeline_tag: translation
---
# MarianMT French to Wolof Model

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-fr-en](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en) on the galsenai/french-wolof-translation dataset.

## Model Description

This MarianMT model has been fine-tuned for the task of translating text from French to Wolof. The dataset used for fine-tuning is available [here](https://huggingface.co/datasets/galsenai/french-wolof-translation).
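
If you want to inspect the training data, the dataset can be loaded with the Hugging Face `datasets` library. This is a minimal sketch; the `train` split name is an assumption, so check the dataset page for the available splits and field names.

```python
from datasets import load_dataset

# Load the French-Wolof parallel corpus used for fine-tuning.
dataset = load_dataset("galsenai/french-wolof-translation")
print(dataset)

# "train" is assumed to be one of the available splits; adjust if needed.
print(dataset["train"][0])
```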

## Training Procedure

- **Learning Rate:** 2e-5
- **Batch Size:** 16
- **Number of Epochs:** 3
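
The original training script is not included in this card. As a rough sketch, a comparable run with the hyperparameters above could look like the following; the column names `fr` and `wo`, the `train` split, the `max_length` of 128, and the output directory are assumptions rather than details from the actual setup.

```python
from datasets import load_dataset
from transformers import (
    DataCollatorForSeq2Seq,
    MarianMTModel,
    MarianTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base_model = "Helsinki-NLP/opus-mt-fr-en"
tokenizer = MarianTokenizer.from_pretrained(base_model)
model = MarianMTModel.from_pretrained(base_model)

dataset = load_dataset("galsenai/french-wolof-translation")

def preprocess(batch):
    # Column names "fr" and "wo" are assumptions; check the dataset card
    # for the actual field names before running.
    inputs = tokenizer(batch["fr"], truncation=True, max_length=128)
    labels = tokenizer(text_target=batch["wo"], truncation=True, max_length=128)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, batched=True)

training_args = Seq2SeqTrainingArguments(
    output_dir="marian-fr-to-wo",
    learning_rate=2e-5,              # hyperparameters listed above
    per_device_train_batch_size=16,
    num_train_epochs=3,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```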

## Evaluation Metrics

The model was evaluated using the BLEU metric:
- BLEU: 0.0157 (on a 0-1 scale)
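
For reference, a score in this format can be computed with the `evaluate` library. The sentences below are placeholders, not examples from the actual evaluation data; in practice they would be the model's decoded outputs and the reference Wolof translations from a held-out split.

```python
import evaluate

bleu = evaluate.load("bleu")

# Placeholder strings standing in for real model outputs and references.
predictions = ["wolof translation produced by the model"]
references = [["reference wolof translation from the dataset"]]

result = bleu.compute(predictions=predictions, references=references)
print(result["bleu"])  # BLEU on a 0-1 scale, matching the value reported above
```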

## Usage

You can use this model directly with the Hugging Face `transformers` library:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "cibfaye/french-wolof-marian-fr-to-wo"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

def translate(text):
    # Tokenize the French input and generate Wolof tokens.
    inputs = tokenizer(text, return_tensors="pt")
    translated_tokens = model.generate(**inputs)
    # Decode the generated tokens and return the first (and only) sequence.
    return tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]

text = "Bonjour, comment ça va ?"
translation = translate(text)
print("Translation:", translation)
```