ashishkj23 committed on
Commit
8d05c09
1 Parent(s): d8d5c06

Upload TFDistilBertForQuestionAnswering

Files changed (3)
  1. README.md +14 -17
  2. config.json +0 -1
  3. tf_model.h5 +3 -0
README.md CHANGED
@@ -2,20 +2,22 @@
 license: apache-2.0
 base_model: distilbert-base-uncased
 tags:
-- generated_from_trainer
+- generated_from_keras_callback
 model-index:
 - name: my_awesome_qa_model
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
+<!-- This model card has been generated automatically according to the information Keras had access to. You should
+probably proofread and complete it, then remove this comment. -->
 
 # my_awesome_qa_model
 
 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.6865
+- Train Loss: 1.5477
+- Validation Loss: 1.7788
+- Epoch: 2
 
 ## Model description
 
@@ -34,26 +36,21 @@
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
-- train_batch_size: 16
-- eval_batch_size: 16
-- seed: 42
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: linear
-- num_epochs: 3
+- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
+- training_precision: float32
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| No log        | 1.0   | 250  | 2.3057          |
-| 2.5561        | 2.0   | 500  | 1.7411          |
-| 2.5561        | 3.0   | 750  | 1.6865          |
+| Train Loss | Validation Loss | Epoch |
+|:----------:|:---------------:|:-----:|
+| 3.4904     | 2.2145          | 0     |
+| 1.8153     | 1.7788          | 1     |
+| 1.5477     | 1.7788          | 2     |
 
 
 ### Framework versions
 
 - Transformers 4.41.2
-- Pytorch 2.3.0+cu121
+- TensorFlow 2.15.0
 - Datasets 2.20.0
 - Tokenizers 0.19.1
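The serialized optimizer entry in the new model card is a raw Keras config dump. As a readability aid only, here is a minimal sketch of how the same linear (PolynomialDecay, power 1.0) schedule and Adam optimizer could be rebuilt; the values are copied from the dump, but this is a reconstruction, not the original training script.

```python
# Sketch reconstructing the optimizer serialized in the model card above.
# All values come from the saved config dump; nothing here is verified
# against the actual training code.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,  # 'initial_learning_rate'
    decay_steps=500,             # 'decay_steps'
    end_learning_rate=0.0,       # decays linearly to zero (power=1.0)
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
    amsgrad=False,
)
```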
config.json CHANGED
@@ -18,7 +18,6 @@
   "seq_classif_dropout": 0.2,
   "sinusoidal_pos_embds": false,
   "tie_weights_": true,
-  "torch_dtype": "float32",
   "transformers_version": "4.41.2",
   "vocab_size": 30522
 }
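The only change to config.json is dropping the PyTorch-specific "torch_dtype" field, which carries no meaning for a TensorFlow checkpoint. A quick way to confirm (the repo id is an assumption pieced together from the commit author and model name):

```python
# Assumed repo id; adjust if the model lives elsewhere. PretrainedConfig
# leaves torch_dtype as None when the field is absent from config.json.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("ashishkj23/my_awesome_qa_model")
print(config.torch_dtype)  # expected: None after this commit
```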
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:71c21f4de8c72d837449a1025d6ddc61fb6c0df63ae9f91b94200cc7b86d00b8
+size 265583592
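tf_model.h5 is tracked with Git LFS, so the repository itself stores only the three-line pointer above (spec version, SHA-256 of the blob, and its size in bytes, roughly 266 MB); the Hub resolves the pointer to the actual weights at download time. A minimal loading and inference sketch, again assuming the repo id ashishkj23/my_awesome_qa_model:

```python
# Minimal QA inference sketch (assumed repo id; not part of this commit).
import tensorflow as tf
from transformers import AutoTokenizer, TFDistilBertForQuestionAnswering

repo_id = "ashishkj23/my_awesome_qa_model"  # assumption: author/model name
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFDistilBertForQuestionAnswering.from_pretrained(repo_id)  # loads tf_model.h5

question = "What does this commit upload?"
context = "This commit uploads TensorFlow weights for a DistilBERT question answering model."

inputs = tokenizer(question, context, return_tensors="tf")
outputs = model(**inputs)

# The highest-scoring start/end logits bound the predicted answer span.
start = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.argmax(outputs.end_logits, axis=-1)[0])
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```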