Training in progress epoch 0

- README.md +14 -19
- config.json +0 -1
- tf_model.h5 +3 -0
README.md
CHANGED
@@ -2,20 +2,22 @@
 license: apache-2.0
 base_model: distilbert-base-uncased
 tags:
-- generated_from_trainer
+- generated_from_keras_callback
 model-index:
-- name: my_awesome_qa_model
+- name: jomacgo/my_awesome_qa_model
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information
-the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
+<!-- This model card has been generated automatically according to the information Keras had access to. You should
+probably proofread and complete it, then remove this comment. -->
 
-# my_awesome_qa_model
+# jomacgo/my_awesome_qa_model
 
 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.8475
+- Train Loss: 2.4180
+- Validation Loss: 0.9110
+- Epoch: 0
 
 ## Model description
 
@@ -34,26 +36,19 @@
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
-- train_batch_size: 16
-- eval_batch_size: 16
-- seed: 42
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: linear
-- num_epochs: 3
+- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 150, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
+- training_precision: float32
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| No log        | 1.0   | 250  | …               |
-| 2.7859        | 2.0   | 500  | 1.9966          |
-| 2.7859        | 3.0   | 750  | 1.8475          |
+| Train Loss | Validation Loss | Epoch |
+|:----------:|:---------------:|:-----:|
+| 2.4180     | 0.9110          | 0     |
 
 
 ### Framework versions
 
 - Transformers 4.37.2
-- Pytorch …
+- TensorFlow 2.12.0
 - Datasets 2.16.1
 - Tokenizers 0.15.1
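The new hyperparameters block is the serialized Keras optimizer config rather than the old Trainer-style list: Adam wrapped around a linear PolynomialDecay schedule from 2e-05 down to 0 over 150 steps. Below is a minimal sketch, not the author's actual script, of the kind of Keras run that produces this card: the dummy data, sequence length, and batch size are invented, and `transformers.create_optimizer` is assumed as the source of the logged schedule (its defaults match the Adam betas, epsilon, and power=1.0 decay above). `PushToHubCallback` is what writes the `generated_from_keras_callback` tag and the per-epoch "Training in progress epoch 0" commits.

```python
import numpy as np
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering, create_optimizer
from transformers.keras_callbacks import PushToHubCallback

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")

# Stand-in for a real QA dataset: 8 question/context pairs with dummy answer spans.
enc = tokenizer(["Who wrote it?"] * 8, ["It was written by Ada."] * 8,
                padding="max_length", max_length=64, return_tensors="np")
features = dict(enc)
features["start_positions"] = np.full(8, 5, dtype=np.int64)
features["end_positions"] = np.full(8, 7, dtype=np.int64)
train_set = tf.data.Dataset.from_tensor_slices(features).batch(4)

# create_optimizer builds the Adam + linear PolynomialDecay pair logged above;
# num_train_steps=150 matches the recorded decay_steps.
optimizer, lr_schedule = create_optimizer(init_lr=2e-5, num_train_steps=150,
                                          num_warmup_steps=0)
model.compile(optimizer=optimizer)  # loss is computed internally from the labels in the batch dict

# PushToHubCallback saves a checkpoint and regenerates the model card each epoch
# (pushing requires a Hugging Face login; repo name taken from the card above).
push_cb = PushToHubCallback(output_dir="my_awesome_qa_model", tokenizer=tokenizer,
                            hub_model_id="jomacgo/my_awesome_qa_model")
model.fit(train_set, validation_data=train_set, epochs=1, callbacks=[push_cb])
```

In a real run, `num_train_steps` would be `len(train_set) * num_epochs`; the logged `decay_steps` of 150 is presumably that product, which is why the learning rate reaches 0.0 exactly at the end of training.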
config.json
CHANGED
@@ -18,7 +18,6 @@
   "seq_classif_dropout": 0.2,
   "sinusoidal_pos_embds": false,
   "tie_weights_": true,
-  "torch_dtype": "float32",
   "transformers_version": "4.37.2",
   "vocab_size": 30522
 }
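The only change to config.json is the removal of `torch_dtype`, a PyTorch-only serialization hint that the TF-side save does not write. A quick hedged sanity check, assuming the repo id from the card:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("jomacgo/my_awesome_qa_model")
print(cfg.torch_dtype)  # None: the key was dropped when the TF model saved its config
```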
tf_model.h5
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:694e98eaf04c360d563e3360361d80e553527f1fb7cd6d7f48db740176a6a206
+size 265583592
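The three added lines are a Git LFS pointer, not the weights themselves: the real payload is the 265,583,592-byte HDF5 file addressed by the sha256 oid, a size consistent with roughly 66M float32 parameters, i.e. DistilBERT-base. A minimal sketch of consuming the newly pushed TF weights, again assuming the repo id from the card:

```python
from transformers import pipeline

# framework="tf" selects the tf_model.h5 weights added in this commit.
qa = pipeline("question-answering", model="jomacgo/my_awesome_qa_model", framework="tf")
print(qa(question="Where are the weights stored?",
         context="The pointer file references TF weights stored in Git LFS."))
```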