Gkumi committed on
Commit 456735f
1 Parent(s): 8173992

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +58 -3
README.md CHANGED
@@ -1,3 +1,58 @@
- ---
- license: apache-2.0
- ---
+ ---
+ language:
+ - de
+ license: apache-2.0
+ base_model: distilbert-base-uncased
+ metrics:
+ - precision
+ - recall
+ - f1
+ - accuracy
+ model-index:
+ - name: Gkumi/tensorflow-DistilBERT
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Gkumi/tensorflow-DistilBERT
+
+ This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - precision: 0.9260
+ - recall: 0.9306
+ - f1: 0.9283
+ - accuracy: 0.9657
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - num_train_epochs: 5
+ - train_batch_size: 16
+ - eval_batch_size: 32
+ - learning_rate: 2e-05
+ - weight_decay_rate: 0.01
+ - num_warmup_steps: 0
+ - fp16: True
+
+ ### Framework versions
+
+ - Transformers 4.40.0
+ - Pytorch 2.2.2+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.19.1
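
The evaluation metrics in the card are mutually consistent: F1 is the harmonic mean of precision and recall, which can be checked directly against the reported values. This is a minimal sketch (the `f1_score` helper name is illustrative, not part of the card):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Evaluation-set numbers reported in the model card above.
precision, recall = 0.9260, 0.9306
f1 = f1_score(precision, recall)
print(round(f1, 4))  # → 0.9283, matching the reported f1
```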