Dr. Jorge Abreu Vicente committed on
Commit 189127a
Parent: 342a822

update model card README.md

Files changed (1): README.md (+10, −21)
README.md CHANGED
@@ -1,17 +1,9 @@
  ---
  license: apache-2.0
  tags:
- - biology
- - science
- - medical
- - biomedical
- - biocuration
- - sourcedata
+ - generated_from_trainer
  datasets:
  - source_data_nlp
- widget:
- - text: "XPT of siRNA treated [MASK] cells after 48 hours of knockdown. Treated cells were fed with the indicated amounts of C8L peptide conjugated to iron oxide beads via a disulfide bond. The cells were then exposed to RF33.70-Luc Reporter [MASK] T cells overnight. Error bars show SD of >3 replicate wells. * p<0.05 for siRNA vs control [MASK] using two-way ANOVA. Representative plot of 3 independent experiments."
- - text: "The [MASK] intensity along the line across a lipid droplet in (A) was measured by ImageJ. The lipid droplet localization of [MASK]-[MASK], represented by two peaks, is clearly visible in fat cells from ppl > [MASK] larvae, but it is lost in fat cells from ppl > [MASK] larvae with [MASK] RNAi or overexpression of [MASK]/[MASK]. More than 30 lipid droplets of each genotype were measured. One typical image curve is shown for each genotype."
  metrics:
  - precision
  - recall
@@ -29,16 +21,13 @@ model-index:
      metrics:
      - name: Precision
        type: precision
-       value: 0.9218777784363701
+       value: 0.9227577212638568
      - name: Recall
        type: recall
-       value: 0.9280386657915151
+       value: 0.9288143683990692
      - name: F1
        type: f1
-       value: 0.9249479631281595
+       value: 0.9257761389318425
- language:
- - en
- pipeline_tag: token-classification
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -48,11 +37,11 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-large](https://huggingface.co/michiyasunaga/BioLinkBERT-large) on the source_data_nlp dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.0141
+ - Loss: 0.0136
  - Accuracy Score: 0.9950
- - Precision: 0.9219
- - Recall: 0.9280
- - F1: 0.9249
+ - Precision: 0.9228
+ - Recall: 0.9288
+ - F1: 0.9258

  ## Model description

@@ -83,7 +72,7 @@ The following hyperparameters were used during training:

  | Training Loss | Epoch | Step | Validation Loss | Accuracy Score | Precision | Recall | F1 |
  |:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------:|:------:|:------:|
- | 0.0129 | 1.0 | 1569 | 0.0141 | 0.9950 | 0.9219 | 0.9280 | 0.9249 |
+ | 0.014 | 1.0 | 1569 | 0.0136 | 0.9950 | 0.9228 | 0.9288 | 0.9258 |


  ### Framework versions
@@ -91,4 +80,4 @@ The following hyperparameters were used during training:
  - Transformers 4.20.0
  - Pytorch 1.11.0a0+bfe5ad2
  - Datasets 1.17.0
- - Tokenizers 0.12.1
+ - Tokenizers 0.12.1
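
For readers of the updated card, a quick usage note: the precision/recall/F1 metrics above are token-classification scores, i.e. the checkpoint is meant to be used as an NER tagger. A minimal sketch follows; the repo id `EMBO/sd-ner` is an assumption (this commit page does not name the repository the diff belongs to), and the entity labels returned depend on the source_data_nlp label set the model was fine-tuned on.

```python
from transformers import pipeline

# Minimal usage sketch for the fine-tuned checkpoint this commit updates.
# ASSUMPTION: "EMBO/sd-ner" is a placeholder repo id; substitute the actual
# model id of this repository.
ner = pipeline(
    "token-classification",
    model="EMBO/sd-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

text = (
    "HeLa cells were treated with siRNA against BRCA1 for 48 hours "
    "and imaged by confocal microscopy."
)
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```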
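Relatedly, the Precision/Recall/F1 values bumped in this commit are, for Trainer-generated token-classification cards, usually entity-level rather than per-token scores. The diff does not say which metric implementation was used; the sketch below uses seqeval, the common choice, with made-up labels purely for illustration.

```python
from seqeval.metrics import f1_score, precision_score, recall_score

# Illustrative entity-level scoring in the style of the card's metrics.
# ASSUMPTION: seqeval backend and example IOB2 labels; neither is stated
# in the diff above.
y_true = [["O", "B-CELL", "I-CELL", "O", "B-GENEPROD"]]
y_pred = [["O", "B-CELL", "I-CELL", "O", "O"]]

print("precision:", precision_score(y_true, y_pred))  # 1.0 -> 1 of 1 predicted entity is correct
print("recall:   ", recall_score(y_true, y_pred))     # 0.5 -> 1 of 2 gold entities is found
print("f1:       ", f1_score(y_true, y_pred))         # ~0.667, harmonic mean of the two
```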