WaRKiD committed on
Commit c8ce8c2
1 Parent(s): fdab06e

Upload TFBertForQuestionAnswering

Files changed (3)
  1. README.md +56 -0
  2. config.json +24 -0
  3. tf_model.h5 +3 -0
README.md CHANGED
@@ -1,3 +1,59 @@
  ---
  license: apache-2.0
+ base_model: bert-large-uncased-whole-word-masking-finetuned-squad
+ tags:
+ - generated_from_keras_callback
+ model-index:
+ - name: bert-large-uncased-whole-word-masking-finetuned-intel-oneapi-llm-dataset
+   results: []
  ---
+
+ <!-- This model card has been generated automatically according to the information Keras had access to. You should
+ probably proofread and complete it, then remove this comment. -->
+
+ # bert-large-uncased-whole-word-masking-finetuned-intel-oneapi-llm-dataset
+
+ This model is a fine-tuned version of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Train Loss: 2.3381
+ - Train End Logits Accuracy: 0.4801
+ - Train Start Logits Accuracy: 0.4324
+ - Validation Loss: 2.1970
+ - Validation End Logits Accuracy: 0.5132
+ - Validation Start Logits Accuracy: 0.4554
+ - Epoch: 1
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 8844, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
+ - training_precision: float32
+
+ ### Training results
+
+ | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
+ |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
+ | 2.4656 | 0.4710 | 0.4189 | 2.2246 | 0.5103 | 0.4548 | 0 |
+ | 2.3381 | 0.4801 | 0.4324 | 2.1970 | 0.5132 | 0.4554 | 1 |
+
+
+ ### Framework versions
+
+ - Transformers 4.34.0
+ - TensorFlow 2.12.0
+ - Datasets 2.14.5
+ - Tokenizers 0.14.0
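The optimizer dict in the training hyperparameters above fully specifies the learning-rate schedule. Below is a minimal sketch reconstructing it with stock `tf.keras` classes; the numeric values are copied from the dict, while the surrounding code is an assumption rather than the author's actual training script:

```python
import tensorflow as tf

# Linear decay from 3e-5 to 0 over 8844 steps (power=1.0, cycle=False),
# exactly as serialized in the optimizer config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-05,
    decay_steps=8844,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam with the listed betas/epsilon; weight decay, EMA and clipping are all off.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```

With `power=1.0` this is a plain linear warmdown; 8844 decay steps over the two logged epochs suggests roughly 4422 optimizer steps per epoch, though the batch size is not recorded in the card.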
config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "_name_or_path": "bert-large-uncased-whole-word-masking-finetuned-squad",
+   "architectures": [
+     "BertForQuestionAnswering"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "transformers_version": "4.34.0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
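The config above instantiates a standard BERT-large encoder (24 layers, hidden size 1024, 16 attention heads) with a span-extraction question-answering head. A minimal TensorFlow inference sketch follows; the repo id is an assumption pieced together from the commit author and model name, the question/context strings are placeholders, and since this commit ships no tokenizer files the tokenizer is loaded from the base checkpoint:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFBertForQuestionAnswering

# Assumed repo id (committer + model name); adjust to the real Hub path.
repo_id = "WaRKiD/bert-large-uncased-whole-word-masking-finetuned-intel-oneapi-llm-dataset"

# No tokenizer is part of this commit, so reuse the base model's tokenizer.
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
model = TFBertForQuestionAnswering.from_pretrained(repo_id)

question = "What is oneAPI?"  # placeholder
context = "oneAPI is an open, cross-architecture programming model for CPUs, GPUs and other accelerators."  # placeholder

inputs = tokenizer(question, context, return_tensors="tf")
outputs = model(**inputs)

# Greedy span decoding: argmax over start and end logits, then decode the span.
start = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.argmax(outputs.end_logits, axis=-1)[0])
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```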
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f9cfe2644406286e98f901ed2dcc9529d3a242671995fb6fb80d7a377481a00
+ size 1336926952
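tf_model.h5 is committed as a Git LFS pointer: the repository stores only the spec version, the payload's SHA-256, and its size (about 1.3 GB); the weights themselves live in LFS storage. A small sketch for verifying that a fully downloaded copy matches the pointer above (the local path is an assumption):

```python
import hashlib

EXPECTED_SHA256 = "5f9cfe2644406286e98f901ed2dcc9529d3a242671995fb6fb80d7a377481a00"
EXPECTED_SIZE = 1336926952  # bytes, from the pointer file

sha, size = hashlib.sha256(), 0
with open("tf_model.h5", "rb") as f:  # assumed local download path
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)
        size += len(chunk)

assert size == EXPECTED_SIZE, f"size mismatch: got {size}"
assert sha.hexdigest() == EXPECTED_SHA256, "sha256 mismatch"
print("tf_model.h5 matches its LFS pointer")
```

If either assertion fires, the download was truncated or you fetched the pointer file itself instead of the resolved LFS object.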