midav committed commit d371830 (1 parent: a783c1f)

Upload README.md with huggingface_hub

Files changed (1): README.md (+62, -0)
 
---
inference: false
tags:
- onnx
- text-classification
- bert
- adapterhub:qa/boolq
- adapter-transformers
datasets:
- boolq
language:
- en
---

# ONNX export of Adapter `AdapterHub/bert-base-uncased-pf-boolq` for bert-base-uncased

## Conversion of [AdapterHub/bert-base-uncased-pf-boolq](https://huggingface.co/AdapterHub/bert-base-uncased-pf-boolq) for UKP SQuARE

## Usage
```python
import numpy as np
from huggingface_hub import hf_hub_download
from onnxruntime import InferenceSession
from transformers import AutoTokenizer

# Download the ONNX export (use filename='model_quant.onnx' for the quantized variant)
onnx_path = hf_hub_download(repo_id='UKP-SQuARE/bert-base-uncased-pf-boolq-onnx', filename='model.onnx')
onnx_model = InferenceSession(onnx_path, providers=['CPUExecutionProvider'])

context = 'ONNX is an open format to represent models. The benefits of using ONNX include interoperability of frameworks and hardware optimization.'
question = 'What are advantages of ONNX?'
tokenizer = AutoTokenizer.from_pretrained('UKP-SQuARE/bert-base-uncased-pf-boolq-onnx')

# Tokenize the (question, context) pair and cast every input to int64, the dtype the ONNX graph expects
inputs = tokenizer(question, context, padding=True, truncation=True, return_tensors='np')
inputs_int64 = {key: np.array(inputs[key], dtype=np.int64) for key in inputs}
outputs = onnx_model.run(input_feed=inputs_int64, output_names=None)
```
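The session returns the BoolQ classification logits, assuming the export exposes them as its first output with shape `(batch_size, 2)`. Below is a minimal sketch of turning them into a yes/no answer; the index-to-label mapping (index 1 = "yes") is an illustrative assumption, not something this card documents.

```python
# Hypothetical post-processing sketch; the index-to-label mapping is an assumption.
logits = outputs[0][0]                         # logits for the single example, shape (2,)
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the two classes
answer = 'yes' if int(np.argmax(probs)) == 1 else 'no'  # assumes index 1 means "yes"
print(answer, float(probs.max()))
```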
## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
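For reference, the original (non-ONNX) adapter can be loaded with adapter-transformers along these lines; this is a sketch assuming a release that provides `AutoAdapterModel`, and the exact API may differ by version:

```python
from transformers import AutoAdapterModel  # provided by the adapter-transformers package

# Load the base model and activate the BoolQ adapter from the Hub
model = AutoAdapterModel.from_pretrained('bert-base-uncased')
adapter_name = model.load_adapter('AdapterHub/bert-base-uncased-pf-boolq', source='hf')
model.set_active_adapters(adapter_name)
```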
## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and
      Pfeiffer, Jonas and
      R{\"u}ckl{\'e}, Andreas and
      Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```