nepp1d0 committed on
Commit 2fd18a5
1 Parent(s): 795e92b

update model card README.md

Files changed (1)
  1. README.md +14 -14
README.md CHANGED
@@ -1,8 +1,6 @@
 ---
 tags:
 - generated_from_trainer
-metrics:
-- accuracy
 model-index:
 - name: SingleBertModel-ProtBertfinetuned-smilesBindingDB
   results: []
@@ -13,10 +11,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # SingleBertModel-ProtBertfinetuned-smilesBindingDB
 
-This model is a fine-tuned version of [nepp1d0/SingleBertModel-ProtBertfinetuned-smilesBindingDB](https://huggingface.co/nepp1d0/SingleBertModel-ProtBertfinetuned-smilesBindingDB) on the None dataset.
+This model is a fine-tuned version of [Rostlab/prot_bert](https://huggingface.co/Rostlab/prot_bert) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.6792
-- Accuracy: 0.4893
+- Loss: 2.4986
 
 ## Model description
 
@@ -41,23 +38,26 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 5
+- num_epochs: 8
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
-|:-------------:|:-----:|:-----:|:---------------:|:--------:|
-| 1.3289        | 1.0   | 13806 | 1.1720          | 0.5107   |
-| 1.2988        | 2.0   | 27612 | 1.3508          | 0.4893   |
-| 1.334         | 3.0   | 41418 | 1.4480          | 0.4893   |
-| 1.3082        | 4.0   | 55224 | 1.4471          | 0.4893   |
-| 1.3746        | 5.0   | 69030 | 1.6792          | 0.4893   |
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| 3.2072        | 1.0   | 100  | 2.6362          |
+| 2.5623        | 2.0   | 200  | 2.5323          |
+| 2.5298        | 3.0   | 300  | 2.5733          |
+| 2.5275        | 4.0   | 400  | 2.5487          |
+| 2.4336        | 5.0   | 500  | 2.5314          |
+| 2.5169        | 6.0   | 600  | 2.5311          |
+| 2.4437        | 7.0   | 700  | 2.3698          |
+| 2.4303        | 8.0   | 800  | 2.3818          |
 
 
 ### Framework versions
 
 - Transformers 4.18.0
-- Pytorch 1.10.0+cu111
+- Pytorch 1.11.0+cu113
 - Datasets 2.1.0
 - Tokenizers 0.12.1
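
For context, the hyperparameters listed in the last hunk correspond to a `transformers` `Trainer` setup. Below is a minimal sketch of how they could be expressed as `TrainingArguments` under Transformers 4.18.0; it is an illustration, not the actual training script. The output directory is a placeholder, the per-epoch evaluation is an assumption inferred from the results table, and the learning rate and batch sizes fall outside the hunk shown, so they are omitted.

```python
from transformers import TrainingArguments

# Minimal sketch: maps the hyperparameters listed in the updated model card
# onto TrainingArguments (Transformers 4.18.0). Learning rate and batch sizes
# are not shown in this diff and are therefore omitted here.
training_args = TrainingArguments(
    output_dir="SingleBertModel-ProtBertfinetuned-smilesBindingDB",  # placeholder
    seed=42,                      # - seed: 42
    adam_beta1=0.9,               # - optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,           #   and epsilon=1e-08
    lr_scheduler_type="linear",   # - lr_scheduler_type: linear
    num_train_epochs=8,           # - num_epochs: 8 (was 5 before this commit)
    fp16=True,                    # - mixed_precision_training: Native AMP
    evaluation_strategy="epoch",  # assumption: per-epoch eval, matching the results table
)
```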
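The updated card names [Rostlab/prot_bert](https://huggingface.co/Rostlab/prot_bert) as the base model. A minimal loading sketch for the resulting checkpoint follows, assuming it keeps a masked-LM head like its base; the card does not state the model head or the expected input format, so both are flagged as assumptions below.

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Minimal sketch (Transformers 4.18.0 / PyTorch 1.11): load the checkpoint this
# card describes. Assumption: the model keeps a masked-LM head like its
# Rostlab/prot_bert base; if not, transformers.AutoModel would load the encoder only.
model_id = "nepp1d0/SingleBertModel-ProtBertfinetuned-smilesBindingDB"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Hypothetical input: the card's name suggests SMILES strings from BindingDB,
# but the expected pre-tokenization format is not documented in this diff.
inputs = tokenizer("CCO", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence_length, vocab_size)
```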