helenai committed on
Commit
8bfe414
1 Parent(s): 2b96c47

Update model to overflowfix version

Files changed (1)
  1. README.md +6 -22
README.md CHANGED
@@ -5,30 +5,16 @@ tags:
 datasets:
 - squad
 model-index:
-- name: jpqd_bert_squad_overflowfix
+- name: bert-base-uncased-squad-v1-jpqd-ov-int8
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
-# jpqd_bert_squad_overflowfix
+# bert-base-uncased-squad-v1-jpqd-ov-int8
 
 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
+It was compressed using [NNCF](https://github.com/openvinotoolkit/nncf) with [Optimum
+Intel](https://github.com/huggingface/optimum-intel#openvino) following the [JPQD question-answering
+example](https://github.com/huggingface/optimum-intel/tree/main/examples/openvino/question-answering#joint-pruning-quantization-and-distillation-jpqd-for-bert-on-squad10).
-
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
 
 ### Training hyperparameters
@@ -42,9 +28,6 @@ The following hyperparameters were used during training:
 - num_epochs: 8.0
 - mixed_precision_training: Native AMP
 
-### Training results
-
-
 
 ### Framework versions
@@ -52,3 +35,4 @@ The following hyperparameters were used during training:
 - Pytorch 1.13.1+cu117
 - Datasets 2.8.0
 - Tokenizers 0.13.2
+