cibfaye committed
Commit 27fcedb
1 Parent(s): 8be6393

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +26 -48
README.md CHANGED
@@ -1,63 +1,41 @@
- ---
- license: apache-2.0
- base_model: Helsinki-NLP/opus-mt-fr-en
- tags:
- - generated_from_trainer
- metrics:
- - bleu
- model-index:
- - name: french-wolof-marian-fr-to-wo
-   results: []
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # french-wolof-marian-fr-to-wo
-
- This model is a fine-tuned version of [Helsinki-NLP/opus-mt-fr-en](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 1.4610
- - Bleu: 0.0157
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 16
- - eval_batch_size: 16
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - num_epochs: 3
- - mixed_precision_training: Native AMP
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | Bleu   |
- |:-------------:|:-----:|:----:|:---------------:|:------:|
- | No log        | 1.0   | 454  | 1.5968          | 0.0050 |
- | 1.9266        | 2.0   | 908  | 1.4846          | 0.0144 |
- | 1.5498        | 3.0   | 1362 | 1.4572          | 0.0161 |
-
- ### Framework versions
-
- - Transformers 4.40.2
- - Pytorch 2.3.0+cu121
- - Datasets 2.19.1
- - Tokenizers 0.19.1
+ # MarianMT French to Wolof Model
+
+ This model is a fine-tuned version of [Helsinki-NLP/opus-mt-fr-en](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en) on the galsenai/french-wolof-translation dataset.
+
+ ## Model Description
+
+ This MarianMT model has been fine-tuned to translate text from French to Wolof. The dataset used for fine-tuning is available [here](https://huggingface.co/datasets/galsenai/french-wolof-translation).
+
+ ## Training Procedure
+
+ - **Learning Rate:** 2e-5
+ - **Batch Size:** 16
+ - **Number of Epochs:** 3
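Expressed as a `transformers` `Seq2SeqTrainingArguments` configuration, the hyperparameters above would look roughly like the sketch below. This is an illustration, not the author's actual training script: `output_dir` is a placeholder, and the seed, optimizer, scheduler, and mixed-precision settings are taken from the auto-generated card that this commit replaces.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the training configuration.
# output_dir is a placeholder; the values mirror the card:
# lr 2e-5, batch size 16, 3 epochs, seed 42, linear schedule, native AMP.
training_args = Seq2SeqTrainingArguments(
    output_dir="french-wolof-marian-fr-to-wo",  # placeholder name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="linear",
    fp16=True,  # "Native AMP" mixed-precision training
    predict_with_generate=True,  # generate full sequences so BLEU can be computed
)
```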
+
+ ## Evaluation Metrics
+
+ The model was evaluated using the BLEU metric:
+ - BLEU: 0.0157
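A note on scale: the card reports BLEU as a fraction, while sacrebleu-style scores are conventionally quoted on a 0-100 scale. Assuming the 0-1 interpretation (which the card itself does not state), the reported score corresponds to roughly 1.57 BLEU points:

```python
# Convert the card's reported BLEU fraction to the conventional
# 0-100 (sacrebleu-style) scale -- assuming the 0-1 interpretation.
bleu_fraction = 0.015657591430909903
bleu_points = bleu_fraction * 100
print(f"BLEU: {bleu_points:.2f}")  # BLEU: 1.57
```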
+
+ ## Usage
+
+ You can use this model directly with the Hugging Face `transformers` library:
+
+ ```python
+ from transformers import MarianMTModel, MarianTokenizer
+
+ model_name = "cibfaye/french-wolof-marian-fr-to-wo"
+ tokenizer = MarianTokenizer.from_pretrained(model_name)
+ model = MarianMTModel.from_pretrained(model_name)
+
+ def translate(text):
+     # Tokenize the source text and generate the Wolof translation.
+     inputs = tokenizer(text, return_tensors="pt")
+     translated_tokens = model.generate(**inputs)
+     # batch_decode returns a list of strings; take the single hypothesis.
+     translation = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
+     return translation
+
+ text = "Bonjour, comment ça va ?"
+ translation = translate(text)
+ print("Translation:", translation)
+ ```