---
language:
- hi
- en
- multilingual
license: apache-2.0
tags:
- translation
- Hindi
- generated_from_keras_callback
model-index:
- name: opus-mt-finetuned-hi-en
  results: []
---

# opus-mt-finetuned-hi-en

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) on the [HindiEnglish Corpora](https://www.clarin.eu/resource-families/parallel-corpora).
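A minimal inference sketch using the standard Transformers seq2seq API. The repo id below is assumed from the model name and should be replaced with this model's actual Hub path:

```python
# Hypothetical usage sketch: Hindi-to-English translation with this
# checkpoint via the Transformers library (TensorFlow backend).
# MODEL_ID is assumed from the model name; replace it with the real
# Hub repo id of this fine-tuned model.
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

MODEL_ID = "opus-mt-finetuned-hi-en"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = TFAutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

# Tokenize a Hindi sentence, generate, and decode the English output.
inputs = tokenizer("मुझे हिंदी पसंद है", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=64)
translation = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(translation)
```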


## Model description

The model is a Transformer-based sequence-to-sequence model, following the architecture introduced in [Attention Is All You Need](https://arxiv.org/abs/1706.03762) (Vaswani et al., 2017).

## Training and evaluation data

More information needed

## Training procedure

The model was trained on two NVIDIA Tesla A100 GPUs on Google's Vertex AI platform.

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: AdamWeightDecay
- training_precision: float32
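The AdamWeightDecay optimizer applies decoupled weight decay (AdamW): the decay term is subtracted from the parameter directly rather than folded into the gradient as in classic L2 regularization. A minimal pure-Python sketch of one update step; the hyperparameter values here are illustrative defaults, not the ones used to train this model:

```python
# Illustrative sketch of one decoupled weight-decay Adam (AdamW) step,
# the idea behind the AdamWeightDecay optimizer listed above.
# Hyperparameter values are generic defaults, not this model's settings.
import math

def adamw_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.01):
    """Return (new_param, new_m, new_v) after one AdamW update at step t."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    # Decoupled weight decay: shrink the parameter directly, separate
    # from the adaptive gradient step.
    param = param - lr * (m_hat / (math.sqrt(v_hat) + eps)
                          + weight_decay * param)
    return param, m, v

p, m, v = 1.0, 0.0, 0.0
p, m, v = adamw_step(p, grad=0.5, m=m, v=v, t=1)
```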

### Training results



### Framework versions

- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1