Chung-Fan committed
Commit ee205a7
1 Parent(s): 0dfb034

Training done for primera-pubmed-20k

Files changed (2)
  1. README.md +6 -5
  2. generation_config.json +1 -1
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+library_name: transformers
 base_model: allenai/PRIMERA
 tags:
 - generated_from_trainer
@@ -14,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [allenai/PRIMERA](https://huggingface.co/allenai/PRIMERA) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.9689
+- Loss: 0.9668
 
 ## Model description
 
@@ -48,12 +49,12 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch  | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 1.0637        | 0.7477 | 500  | 0.9689          |
+| 1.0626        | 0.7477 | 500  | 0.9668          |
 
 
 ### Framework versions
 
-- Transformers 4.40.0
-- Pytorch 2.2.1+cu121
-- Datasets 2.19.0
+- Transformers 4.44.2
+- Pytorch 2.4.1+cu121
+- Datasets 3.0.1
 - Tokenizers 0.19.1
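
The card does not yet include a usage snippet. As a minimal sketch, the fine-tuned checkpoint should load with the standard transformers seq2seq classes; the repo id `Chung-Fan/primera-pubmed-20k` below is only inferred from the commit message and may differ from the actual Hub id.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed repo id (inferred from the commit message "primera-pubmed-20k");
# replace with the actual model id on the Hub.
model_id = "Chung-Fan/primera-pubmed-20k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# PRIMERA is an LED-style long-input encoder-decoder, so long documents are expected.
document = "Replace this with the article(s) you want to summarize."
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=4096)

summary_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```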
generation_config.json CHANGED
@@ -5,5 +5,5 @@
   "eos_token_id": 2,
   "no_repeat_ngram_size": 3,
   "pad_token_id": 1,
-  "transformers_version": "4.40.0"
+  "transformers_version": "4.44.2"
 }
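
These keys are the defaults that model.generate() falls back to when no arguments are passed at call time. A small sketch, again assuming the repo id used above:

```python
from transformers import GenerationConfig

# Assumed repo id (see the sketch above); replace with the actual Hub id.
gen_config = GenerationConfig.from_pretrained("Chung-Fan/primera-pubmed-20k")

print(gen_config.no_repeat_ngram_size)  # 3 -> repeated trigrams are blocked during decoding
print(gen_config.eos_token_id)          # 2
print(gen_config.pad_token_id)          # 1

# model.generate(**inputs) picks these defaults up automatically; passing
# e.g. no_repeat_ngram_size=4 at call time would override the file's value.
```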