Mit1208 committed on
Commit
942542c
1 Parent(s): a10c794

End of training

Files changed (2)
  1. README.md +17 -24
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,11 +1,6 @@
 ---
 tags:
 - generated_from_trainer
-metrics:
-- precision
-- recall
-- f1
-- accuracy
 model-index:
 - name: UDOP-finetuned-DocLayNet-3
   results: []
@@ -18,11 +13,16 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.0136
-- Precision: 0.6020
-- Recall: 0.5497
-- F1: 0.5747
-- Accuracy: 0.7782
+- eval_loss: 0.7407
+- eval_precision: 0.6058
+- eval_recall: 0.5870
+- eval_f1: 0.5962
+- eval_accuracy: 0.7863
+- eval_runtime: 16.2128
+- eval_samples_per_second: 3.886
+- eval_steps_per_second: 1.974
+- epoch: 18.6
+- step: 800
 
 ## Model description
 
@@ -41,27 +41,20 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 3e-05
-- train_batch_size: 8
-- eval_batch_size: 8
+- learning_rate: 2e-05
+- train_batch_size: 2
+- eval_batch_size: 2
 - seed: 42
+- gradient_accumulation_steps: 8
+- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- training_steps: 1500
-
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
-|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| 1.0703        | 5.81  | 500  | 0.7495          | 0.6946    | 0.7581 | 0.7249 | 0.8152   |
-| 0.2524        | 11.63 | 1000 | 0.8365          | 0.6714    | 0.7688 | 0.7168 | 0.7962   |
-| 0.136         | 17.44 | 1500 | 0.7624          | 0.6743    | 0.7903 | 0.7277 | 0.8246   |
-
 
 ### Framework versions
 
 - Transformers 4.39.0.dev0
-- Pytorch 2.1.0+cu121
+- Pytorch 2.2.1+cu121
 - Datasets 2.18.0
 - Tokenizers 0.15.2
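
As a cross-check on the updated hyperparameters: the reported total_train_batch_size follows from the per-device batch size and gradient accumulation, and the logged epoch/step pair implies the approximate optimizer steps per epoch. A minimal sketch of that arithmetic (the dataset size is not stated in the card, so the per-epoch sample count is an estimate):

```python
# Batch-size arithmetic behind the updated hyperparameters:
# total_train_batch_size = train_batch_size * gradient_accumulation_steps.
train_batch_size = 2
gradient_accumulation_steps = 8
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 16

# The evaluation was logged at step 800, epoch 18.6, so one epoch
# corresponds to roughly 800 / 18.6 ~= 43 optimizer steps
# (about 43 * 16 ~= 688 training samples per epoch).
steps_per_epoch = 800 / 18.6
print(round(steps_per_epoch))  # 43
```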
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:164b372ff88b97d4ba52099d1cfe46a2ac5d646bdea40348017f1d0b0cd5d6cd
+oid sha256:18232de3292659132c05f65eebfabf492419318e5acc6a70042059621b58b991
 size 1355766676
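
The model.safetensors change above only swaps the Git LFS pointer's sha256 oid; the size is unchanged. A downloaded file can be checked against such a pointer by hashing it. A minimal sketch, where the function name and path are illustrative, not part of this repository:

```python
# Sketch: verify a local file against a Git LFS pointer like the one above.
# The pointer records "oid sha256:<hex>" and "size <bytes>".
import hashlib
import os

def verify_lfs_pointer(path: str, expected_oid: str, expected_size: int) -> bool:
    """Return True if the file matches the pointer's size and sha256 oid."""
    if os.path.getsize(path) != expected_size:
        return False
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks to avoid loading a 1.3 GB checkpoint into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_oid
```

For example, after downloading model.safetensors, `verify_lfs_pointer("model.safetensors", "18232de3...", 1355766676)` should return True for this commit's file.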