---
inference: false
tags:
- onnx
- roberta
- adapter-transformers
datasets:
- quartz
language:
- en
---

# ONNX export of Adapter `AdapterHub/roberta-base-pf-quartz` for roberta-base
## Conversion of [AdapterHub/roberta-base-pf-quartz](https://huggingface.co/AdapterHub/roberta-base-pf-quartz) for UKP SQuARE


## Usage
```python
import numpy as np
from huggingface_hub import hf_hub_download
from onnxruntime import InferenceSession
from transformers import AutoTokenizer

# Download the exported model (use filename='model_quant.onnx' for the quantized version)
onnx_path = hf_hub_download(repo_id='UKP-SQuARE/roberta-base-pf-quartz-onnx', filename='model.onnx')
onnx_model = InferenceSession(onnx_path, providers=['CPUExecutionProvider'])

tokenizer = AutoTokenizer.from_pretrained('UKP-SQuARE/roberta-base-pf-quartz-onnx')

context = 'ONNX is an open format to represent models. The benefits of using ONNX include interoperability of frameworks and hardware optimization.'
question = 'What are advantages of ONNX?'
choices = ["Cat", "Horse", "Tiger", "Fish"]

# Pair the context with the question concatenated to each answer choice
raw_input = [[context, question + ' ' + choice] for choice in choices]
inputs = tokenizer(raw_input, padding=True, truncation=True,
                   return_token_type_ids=True, return_tensors="np")

# The exported multiple-choice model expects inputs of shape (1, num_choices, seq_len)
inputs['token_type_ids'] = np.expand_dims(inputs['token_type_ids'], axis=0)
inputs['input_ids'] = np.expand_dims(inputs['input_ids'], axis=0)
inputs['attention_mask'] = np.expand_dims(inputs['attention_mask'], axis=0)

outputs = onnx_model.run(input_feed=dict(inputs), output_names=None)
```
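To turn the raw output into a predicted answer, take the argmax over the choice dimension. A minimal sketch, assuming the first output holds the multiple-choice logits with shape `(1, num_choices)`:

```python
# Assumption: outputs[0] contains the multiple-choice logits, shape (1, num_choices)
logits = outputs[0]
predicted_choice = choices[int(np.argmax(logits, axis=-1)[0])]
print(predicted_choice)
```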

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
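
To use the original (non-ONNX) adapter directly, a minimal sketch with the `adapter-transformers` library might look as follows (the exact head and activation API can differ between library versions):

```python
# Assumes the adapter-transformers fork of Hugging Face Transformers is installed
from transformers import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-quartz", source="hf")
model.set_active_adapters(adapter_name)
```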


## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
    title = "What to Pre-Train on? Efficient Intermediate Task Selection",
    author = "Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2104.08247",
    pages = "to appear",
}
```