Gkumi committed · Commit 335fa4e (verified) · 1 Parent(s): b80b6ca

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +23 -14
README.md CHANGED
@@ -1,21 +1,29 @@
 ---
+language:
+- de
 license: apache-2.0
-tags:
-- generated_from_keras_callback
 base_model: distilbert-base-uncased
+metrics:
+- precision
+- recall
+- f1
+- accuracy
 model-index:
-- name: naya-model
+- name: Gkumi/naya-model
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information Keras had access to. You should
-probably proofread and complete it, then remove this comment. -->
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->
 
-# naya-model
+# Gkumi/naya-model
 
 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-
+- precision: 0.9260
+- recall: 0.9306
+- f1: 0.9283
+- accuracy: 0.9657
 
 ## Model description
 
@@ -34,16 +42,17 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 10875, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
-- training_precision: float32
-
-### Training results
-
-
+- num_train_epochs: 5
+- train_batch_size: 16
+- eval_batch_size: 32
+- learning_rate: 2e-05
+- weight_decay_rate: 0.01
+- num_warmup_steps: 0
+- fp16: True
 
 ### Framework versions
 
 - Transformers 4.40.0
-- TensorFlow 2.15.0
+- Pytorch 2.2.2+cu121
 - Datasets 2.18.0
 - Tokenizers 0.19.1
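The metrics block added in this commit can be cross-checked: with the usual convention (as in scikit-learn/seqeval, assumed here), F1 is the harmonic mean of precision and recall. A minimal sketch using the values from the diff:

```python
# Values copied verbatim from the updated README's evaluation section.
precision = 0.9260
recall = 0.9306

# F1 as the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)

print(round(f1, 4))  # → 0.9283, matching the card's reported f1
```

This agrees with the `f1: 0.9283` entry, so the three metrics in the new card are internally consistent.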