MahmoudH committed on
Commit a951986
1 Parent(s): 9a6e3f8

Update README.md

Files changed (1)
  1. README.md +15 -11
README.md CHANGED
@@ -3,20 +3,21 @@ license: apache-2.0
 tags:
 - generated_from_keras_callback
 model-index:
-- name: t5-v1_1-base-finetuned3
+- name: t5-v1_1-base-finetuned2
   results: []
+pipeline_tag: summarization
 ---

 <!-- This model card has been generated automatically according to the information Keras had access to. You should
 probably proofread and complete it, then remove this comment. -->

-# t5-v1_1-base-finetuned3
+# t5-v1_1-base-finetuned-sci_summ

-This model is a fine-tuned version of [MahmoudH/t5-v1_1-base-finetuned2](https://huggingface.co/MahmoudH/t5-v1_1-base-finetuned2) on an unknown dataset.
+This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss: 1.6841
-- Validation Loss: 1.4443
-- Epoch: 1
+- Train Loss: 2.6709
+- Validation Loss: 2.4722
+- Epoch: 4

 ## Model description

@@ -35,20 +36,23 @@ More information needed
 ### Training hyperparameters

 The following hyperparameters were used during training:
-- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 22098, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
+- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 85205, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
 - training_precision: mixed_float16

 ### Training results

 | Train Loss | Validation Loss | Epoch |
 |:----------:|:---------------:|:-----:|
-| 2.0059 | 1.4861 | 0 |
-| 1.6841 | 1.4443 | 1 |
+| 3.2489 | 2.6484 | 0 |
+| 2.9313 | 2.5538 | 1 |
+| 2.7882 | 2.5010 | 2 |
+| 2.7125 | 2.4760 | 3 |
+| 2.6709 | 2.4722 | 4 |


 ### Framework versions

 - Transformers 4.26.1
 - TensorFlow 2.11.0
-- Datasets 2.10.1
-- Tokenizers 0.13.2
+- Datasets 2.9.0
+- Tokenizers 0.13.2
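
The optimizer entry in the diff above is the raw serialized Keras config. As a reading aid, here is a minimal sketch, assuming TensorFlow 2.11, of how an equivalent Adam optimizer with the linear PolynomialDecay schedule and mixed_float16 policy could be constructed. It is not the author's original training script; the learning-rate and decay_steps values are simply the ones reported in the updated card.

```python
import tensorflow as tf

# The card reports training_precision: mixed_float16.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Linear decay (power=1.0) from 5e-05 to 0.0 over the reported 85205 steps.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5e-05,
    decay_steps=85205,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam with the betas/epsilon listed in the serialized config; jit_compile=True
# matches the config and relies on the non-legacy Keras optimizer in TF 2.11.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
    jit_compile=True,
)
```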
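Since the commit adds pipeline_tag: summarization, a minimal usage sketch follows. It assumes the checkpoint is published as MahmoudH/t5-v1_1-base-finetuned2 (the model-index name in this commit); substitute the actual repository id if it differs.

```python
from transformers import pipeline

# Assumed repo id taken from the model-index name above; adjust if needed.
# framework="tf" because the card reports TensorFlow/Keras training, so the
# repository may only ship TF weights.
summarizer = pipeline(
    "summarization",
    model="MahmoudH/t5-v1_1-base-finetuned2",
    framework="tf",
)

text = "Replace this with the document you want to summarize."
summary = summarizer(text, max_length=128, min_length=16)[0]["summary_text"]
print(summary)
```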