chibichibi committed on
Commit f4c7580
1 Parent(s): 90b15b8

Upload TFDistilBertForQuestionAnswering

Files changed (3):
  1. README.md +20 -36
  2. config.json +1 -1
  3. tf_model.h5 +2 -2
README.md CHANGED
@@ -1,70 +1,54 @@
  ---
- datasets:
- - squad
  license: apache-2.0
- tags:
  - generated_from_keras_callback
- metrics:
- - f1
- model-index:
  - name: transformers-qa
-   results:
-   - task:
-       name: "Question Answering"
-       type: question-answering
-     dataset:
-       type: squad
-       name: SQuAD
-       args: en
-     metrics:
-       []
- widget:
- - context: "Keras is an API designed for human beings, not machines. Keras follows best practices for reducing cognitive load: it offers consistent & simple APIs, it minimizes the number of user actions required for common use cases, and it provides clear and actionable feedback upon user error."
  ---

  <!-- This model card has been generated automatically according to the information Keras had access to. You should
  probably proofread and complete it, then remove this comment. -->

- # Question Answering with Hugging Face Transformers and Keras 🤗❤️

- This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on SQuAD dataset.
  It achieves the following results on the evaluation set:
- - Train Loss: 0.9300
- - Validation Loss: 1.1437
- - Epoch: 1

  ## Model description

- Question answering model based on distilbert-base-cased, trained with 🤗Transformers + ❤️Keras.

  ## Intended uses & limitations

- This model is trained for Question Answering tutorial for Keras.io.

  ## Training and evaluation data

- It is trained on [SQuAD](https://huggingface.co/datasets/squad) question answering dataset. ⁉️

  ## Training procedure

- Find the notebook in Keras Examples [here](https://keras.io/examples/nlp/question_answering/). ❤️
-
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
  - training_precision: mixed_float16

  ### Training results

  | Train Loss | Validation Loss | Epoch |
  |:----------:|:---------------:|:-----:|
- | 1.5145 | 1.1500 | 0 |
- | 0.9300 | 1.1437 | 1 |
-

  ### Framework versions

- - Transformers 4.16.0.dev0
- - TensorFlow 2.6.0
- - Datasets 1.16.2.dev0
- - Tokenizers 0.10.3
  ---
  license: apache-2.0
+ base_model: distilbert-base-cased
+ tags:
  - generated_from_keras_callback
+ model-index:
  - name: transformers-qa
+   results: []
  ---
+
  <!-- This model card has been generated automatically according to the information Keras had access to. You should
  probably proofread and complete it, then remove this comment. -->

+ # transformers-qa

+ This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
  It achieves the following results on the evaluation set:
+ - Train Loss: 1.5435
+ - Validation Loss: 1.1638
+ - Epoch: 0

  ## Model description

+ More information needed

  ## Intended uses & limitations

+ More information needed

  ## Training and evaluation data

+ More information needed

  ## Training procedure

  ### Training hyperparameters

  The following hyperparameters were used during training:
+ - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
  - training_precision: mixed_float16
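The serialized optimizer dict above is a standard Keras Adam configuration. As an illustration only (not the actual training code), a single Adam update with the listed `learning_rate`, `beta_1`, `beta_2`, and `epsilon` values can be sketched in plain Python:

```python
import math

# Illustrative sketch of one Adam update step, using the hyperparameter
# values from the config above (learning_rate=5e-05, beta_1=0.9,
# beta_2=0.999, epsilon=1e-07). Not the actual Keras implementation.
def adam_step(param, grad, m, v, t,
              lr=5e-05, beta_1=0.9, beta_2=0.999, epsilon=1e-07):
    """Return (new_param, new_m, new_v) after a single Adam update."""
    m = beta_1 * m + (1 - beta_1) * grad          # first-moment estimate
    v = beta_2 * v + (1 - beta_2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta_1 ** t)                 # bias correction
    v_hat = v / (1 - beta_2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + epsilon)
    return param, m, v

# First step (t=1) on a scalar parameter with gradient 1.0:
p, m, v = adam_step(1.0, 1.0, 0.0, 0.0, t=1)
```

After bias correction at t=1, `m_hat` and `v_hat` are both 1.0 here, so the step size is lr / (1 + epsilon), marginally below 5e-05.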
  ### Training results

  | Train Loss | Validation Loss | Epoch |
  |:----------:|:---------------:|:-----:|
+ | 1.5435 | 1.1638 | 0 |
+

  ### Framework versions

+ - Transformers 4.32.0.dev0
+ - TensorFlow 2.12.0
+ - Datasets 2.14.4
+ - Tokenizers 0.13.3
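The `training_precision: mixed_float16` entry above means most forward-pass arithmetic ran in IEEE 754 half precision. As a stdlib-only illustration (unrelated to the training code), the `struct` module's half-float format shows how coarse float16 rounding is; this is why mixed-precision training keeps a float32 master copy of the weights:

```python
import struct

def to_float16(x):
    """Round-trip a Python float through IEEE 754 half precision."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

print(to_float16(1.0))     # exactly representable
print(to_float16(1.0005))  # rounds to a neighbor about 2**-10 away
print(to_float16(1e-8))    # underflows to 0.0, below the smallest subnormal
```

With only a 10-bit mantissa, float16 cannot represent tiny weight updates like the ~5e-05 Adam steps above once weights are near 1.0, so accumulating them in float32 is essential.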
config.json CHANGED
@@ -19,6 +19,6 @@
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "tie_weights_": true,
- "transformers_version": "4.16.0.dev0",
+ "transformers_version": "4.32.0.dev0",
  "vocab_size": 28996
  }
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d9a2797dee03f701b3d03f25de05f9144946933237ec2791e176babc0d288e6a
- size 260895816
+ oid sha256:5b8183fd67b3ab9cbef687a4848c3a9143c2e7b2b63797c8589db0777fed0abc
+ size 260895720
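As the diff above shows, `tf_model.h5` is stored in the repository as a Git LFS pointer file rather than the weights themselves: three `key value` lines giving the spec version, the SHA-256 of the real file, and its size in bytes. A minimal parsing sketch (the `oid` and `size` values are copied from the new pointer above):

```python
# The new tf_model.h5 pointer from the diff above, verbatim.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:5b8183fd67b3ab9cbef687a4848c3a9143c2e7b2b63797c8589db0777fed0abc
size 260895720
"""

def parse_lfs_pointer(text):
    """Split each 'key value' line of a Git LFS pointer into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

fields = parse_lfs_pointer(POINTER)
print(fields["oid"])   # sha256:5b8183fd...
print(fields["size"])  # 260895720
```

The roughly 249 MB weights file itself lives in LFS storage and is fetched by hash on checkout, which is why the diff only shows the pointer changing.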