AbhirupGhosh committed
Commit 223b70d
1 Parent(s): 463e461

Update README.md

Files changed (1):
  README.md (+5 -11)
README.md CHANGED
@@ -9,22 +9,14 @@ model-index:
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information Keras had access to. You should
-probably proofread and complete it, then remove this comment. -->
-
 # opus-mt-finetuned-hi-en
 
-This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) on an unknown dataset.
-It achieves the following results on the evaluation set:
+This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) on the [HindiEnglish Corpora](https://www.clarin.eu/resource-families/parallel-corpora).
 
 
 ## Model description
 
-More information needed
-
-## Intended uses & limitations
-
-More information needed
+The model is a Transformer, following the architecture defined in [Attention Is All You Need](https://arxiv.org/abs/1706.03762?context=cs) (Vaswani et al., 2017).
 
 ## Training and evaluation data
 
@@ -32,10 +24,12 @@ More information needed
 
 ## Training procedure
 
+The model was trained on two NVIDIA Tesla A100 GPUs on Google's Vertex AI platform.
+
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: None
+- optimizer: AdamWeightDecay
 - training_precision: float32
 
 ### Training results
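The updated card names the base checkpoint but gives no usage snippet. Below is a minimal inference sketch using the standard `transformers` seq2seq API. The Hub id `AbhirupGhosh/opus-mt-finetuned-hi-en` is an assumption inferred from the commit author and repository name, not stated in the card; substitute the actual repo id if it differs.

```python
# Hedged sketch: running the fine-tuned Hindi->English Marian model for inference.
# MODEL_ID is an assumption based on the commit author's namespace; adjust as needed.
MODEL_ID = "AbhirupGhosh/opus-mt-finetuned-hi-en"


def translate(text: str, model_id: str = MODEL_ID) -> str:
    """Translate one Hindi sentence to English with the fine-tuned checkpoint."""
    # Imported lazily so the module can be inspected without transformers installed.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

    # Tokenize, generate, and decode a single-sentence batch.
    batch = tokenizer([text], return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]
```

Loading the checkpoint downloads it from the Hub on first use, so calls to `translate` require network access and the `transformers` and `torch` packages.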