Sonny committed on
Commit: 8d4ac4d
1 Parent(s): 884f1d8

commit files to HF hub

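The files in this commit (model weights, tokenizer files, `training_args.bin`) match what `save_pretrained` / the `Trainer` writes for a fine-tuned checkpoint. A commit like this is typically produced by pushing a local checkpoint to the Hub, roughly as in the sketch below (the local path and repo id are placeholders, not the actual ones used here):

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Placeholder local path and repo id, for illustration only.
checkpoint_dir = "path/to/local/checkpoint"
repo_id = "my-username/distilbert-base-uncased-distilled-squad-v1"

model = AutoModelForQuestionAnswering.from_pretrained(checkpoint_dir)
tokenizer = AutoTokenizer.from_pretrained(checkpoint_dir)

# push_to_hub commits the files to the Hub repo; large binaries such as
# pytorch_model.bin are stored via Git LFS according to .gitattributes.
model.push_to_hub(repo_id)
tokenizer.push_to_hub(repo_id)
```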
.gitattributes CHANGED
@@ -1,27 +1,8 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
  *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
  *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
  *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.tar.gz filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
 
README.md ADDED
@@ -0,0 +1,56 @@
+ ---
+ language:
+ - en
+ thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
+ tags:
+ - question-answering
+ license: apache-2.0
+ datasets:
+ - squad
+ metrics:
+ - squad
+ ---
+
+ # DistilBERT with a second step of distillation
+
+ ## Model description
+
+ This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
+
+ In this version, the following pre-trained models were used:
+
+ * Student: `distilbert-base-uncased`
+ * Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
+
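+ As a quick usage illustration (the repo id below is a placeholder for wherever this checkpoint is hosted), the model can be used for extractive question answering with the `pipeline` API:
+
+ ```python
+ from transformers import pipeline
+
+ # Placeholder repo id; point this at the actual model repository.
+ qa = pipeline("question-answering", model="my-username/distilbert-base-uncased-distilled-squad-v1")
+ result = qa(
+     question="Which dataset was the student fine-tuned on?",
+     context="The DistilBERT student was fine-tuned on SQuAD v1.1 with a BERT teacher.",
+ )
+ print(result["answer"], result["score"])
+ ```
+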
+ ## Training data
+
+ This model was trained on the SQuAD v1.1 dataset, which can be obtained from the `datasets` library as follows:
+
+ ```python
+ from datasets import load_dataset
+ squad = load_dataset('squad')
+ ```
+
+ ## Training procedure
+
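+ The exact training script is not reproduced here, but the second distillation step described above amounts to combining the usual span-extraction cross-entropy with a temperature-softened KL term against the teacher's start/end logits. A minimal sketch of that loss (illustrative only, not the code used to produce this checkpoint):
+
+ ```python
+ import torch.nn.functional as F
+
+ def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
+     """Hard-label cross-entropy blended with soft-label KL against the teacher.
+
+     student_logits / teacher_logits: (batch, seq_len) start or end logits.
+     labels: (batch,) gold start or end token positions.
+     """
+     ce = F.cross_entropy(student_logits, labels)
+     kl = F.kl_div(
+         F.log_softmax(student_logits / temperature, dim=-1),
+         F.softmax(teacher_logits / temperature, dim=-1),
+         reduction="batchmean",
+     ) * temperature ** 2
+     return alpha * ce + (1.0 - alpha) * kl
+
+ # Applied separately to the start and end logits, then averaged.
+ ```
+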
+ ## Eval results
+
+ |                  | Exact Match | F1   |
+ |------------------|-------------|------|
+ | DistilBERT paper | 79.1        | 86.9 |
+ | Ours             | 78.4        | 86.5 |
+
+ The scores were calculated using the `squad` metric from `datasets`.
+
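+ For reference, the metric expects predictions and references in the format below (toy values, shown only to illustrate the call):
+
+ ```python
+ from datasets import load_metric
+
+ squad_metric = load_metric("squad")
+ predictions = [{"id": "example-0", "prediction_text": "SQuAD v1.1"}]
+ references = [{"id": "example-0",
+                "answers": {"text": ["SQuAD v1.1"], "answer_start": [42]}}]
+ print(squad_metric.compute(predictions=predictions, references=references))
+ # {'exact_match': 100.0, 'f1': 100.0}
+ ```
+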
+ ### BibTeX entry and citation info
+
+ ```bibtex
+ @misc{sanh2020distilbert,
+       title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
+       author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
+       year={2020},
+       eprint={1910.01108},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL}
+ }
+ ```
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:24283a1b9a84d3201c5822c359e58d3562ab74e13cfc6ba5c492df07c82ab121
+ size 265499080
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "name_or_path": "distilbert-base-uncased"}
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ed6426209b938175459b2894cca548a10e665caf78af631330d1966cc28d6f44
+ size 2095
vocab.txt ADDED
The diff for this file is too large to render. See raw diff