navjordj committed
Commit 0f0588d
1 Parent(s): 254b6e8

update model card README.md

Files changed (1): README.md (+3 -25)
README.md CHANGED

```diff
@@ -1,30 +1,12 @@
 ---
-language:
-- en
-- 'no'
 license: apache-2.0
 tags:
 - generated_from_trainer
 datasets:
 - bible_para
-metrics:
-- bleu
 model-index:
 - name: flan-t5-large_en-no
-  results:
-  - task:
-      name: Translation
-      type: translation
-    dataset:
-      name: bible_para en-no
-      type: bible_para
-      config: en-no
-      split: train
-      args: en-no
-    metrics:
-    - name: Bleu
-      type: bleu
-      value: 34.2122
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,11 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # flan-t5-large_en-no
 
-This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the bible_para en-no dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.7058
-- Bleu: 34.2122
-- Gen Len: 65.0263
+This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the bible_para dataset.
 
 ## Model description
 
@@ -61,7 +39,7 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 3.0
+- num_epochs: 5.0
 
 ### Training results
```
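
The training hyperparameters in the card specify `lr_scheduler_type: linear`. As a rough illustration only (not the Trainer's actual implementation), a "linear" schedule warms the learning rate up from zero and then decays it linearly back to zero over the run; `linear_lr` and its parameters below are hypothetical names chosen for this sketch:

```python
def linear_lr(step: int, total_steps: int, base_lr: float, warmup_steps: int = 0) -> float:
    """Learning rate at `step` under linear warmup followed by linear decay,
    mirroring the shape of a "linear" lr_scheduler_type."""
    if step < warmup_steps:
        # Linear warmup from 0 up to base_lr.
        return base_lr * step / max(1, warmup_steps)
    # Linear decay from base_lr down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

With `warmup_steps=0` the rate starts at `base_lr` on the first step and reaches zero on the last; `total_steps` here would be the number of optimizer steps over all 5 epochs.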