ngchuchi committed
Commit 1541d9f
1 Parent(s): 807028a

Model save
README.md CHANGED
@@ -1,9 +1,8 @@
 ---
 license: cc-by-4.0
-library_name: peft
+base_model: deepset/roberta-base-squad2
 tags:
 - generated_from_trainer
-base_model: deepset/roberta-base-squad2
 model-index:
 - name: roberta-base-squad2-finetuned-BioASQ-ds
   results: []
@@ -16,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.5794
+- Loss: 0.8373
 
 ## Model description
 
@@ -36,8 +35,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size: 16
-- eval_batch_size: 16
+- train_batch_size: 8
+- eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -47,16 +46,15 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 2.1583        | 1.0   | 778  | 1.7934          |
-| 1.7158        | 2.0   | 1556 | 1.6634          |
-| 1.6443        | 3.0   | 2334 | 1.5985          |
-| 1.5896        | 4.0   | 3112 | 1.5794          |
+| 1.0159        | 1.0   | 1556 | 0.9558          |
+| 0.7883        | 2.0   | 3112 | 0.8757          |
+| 0.7006        | 3.0   | 4668 | 0.8293          |
+| 0.6117        | 4.0   | 6224 | 0.8373          |
 
 
 ### Framework versions
 
-- PEFT 0.10.0
 - Transformers 4.39.3
 - Pytorch 2.2.1+cu121
 - Datasets 2.18.0
-- Tokenizers 0.15.2
+- Tokenizers 0.15.2
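The hyperparameter change in this diff (train batch size 16 → 8) lines up with the step column of the new training-log table: halving the batch size doubles the optimizer steps per epoch (778 → 1556). A quick sanity check, assuming one optimizer step per batch (no gradient accumulation):

```python
# Sanity check on the diff above: the batch size drops from 16 to 8,
# and the per-epoch step count in the log table doubles (778 -> 1556).
# Assumes one optimizer step per batch (no gradient accumulation).
old_batch, old_steps_per_epoch = 16, 778
new_batch = 8

# Implied training-set size, identical for both runs.
train_examples = old_batch * old_steps_per_epoch  # 12448

new_steps_per_epoch = train_examples // new_batch
print(new_steps_per_epoch)  # 1556, matching the epoch-1 row of the new table
```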
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0e12b342be4db35927c889f9c75309b417992ad57df6919b529d8a2900323e86
+oid sha256:5d9de0d900b7357a4092017d29f50fc4e2f3440b0cd3133fd6117c1caeae4345
 size 496250232
runs/Apr10_22-46-22_4357093ed208/events.out.tfevents.1712789183.4357093ed208.3659.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:dbd5df30a641eac44b07505eb0184b773e04e757b8f2a26621de908a7c09b3ce
-size 8385
+oid sha256:b12b4fba4cf646cb19291bb143d6a8463b22d15721c979b8c07064171325b147
+size 9010
runs/Apr10_22-46-22_4357093ed208/events.out.tfevents.1712793517.4357093ed208.3659.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d36688493aad7a981b66085fa7a356f5470357824575d8c14a617a5f4e95cd4e
+size 359
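The `model.safetensors` and tfevents diffs above are over Git LFS pointer files (the `version`/`oid`/`size` triples), not the binary weights themselves; only the pointer text lives in the git history. A minimal sketch of reading such a pointer (the helper name `parse_lfs_pointer` is hypothetical, not part of any library):

```python
# Git LFS stores large files out of band; the repo tracks a small text
# pointer with a version line, a sha256 oid, and the byte size.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:5d9de0d900b7357a4092017d29f50fc4e2f3440b0cd3133fd6117c1caeae4345
size 496250232
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of an LFS pointer into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

meta = parse_lfs_pointer(pointer)
print(meta["oid"], meta["size"])  # sha256:5d9de0d9... 496250232
```

The `oid` change with an unchanged `size` (496250232 bytes) is what you expect from continued training: same tensor shapes, different weight values.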