---
tags:
- generation
language:
- multilingual
- cs
- en
---

# mt5-base for Primed Czech+English Generative Question Answering

This is the [mt5-base](https://huggingface.co/google/mt5-base) model with an LM head for generating extractive answers,
given a small set of 2-5 demonstrations (i.e., primes).

## Priming

Note that **this is a priming model** that expects a **set of demonstrations** of your task of interest,
similarly to GPT-3.
Rather than aiming for the best performance on conventional question answering, it learns to extrapolate the pattern of the given demonstrations
to novel tasks, such as Named Entity Recognition or keyword extraction.

## Data & Training

This model was trained on a combination of the [AdversarialQA](https://adversarialqa.github.io)
and [Czech SQAD 3.0](https://lindat.cz/repository/xmlui/handle/11234/1-3069)
Question Answering datasets.

To train the model to use the demonstrations, we've **clustered** the samples by the question word(s)
in English AdversarialQA and by the category in Czech SQAD, and used examples from the same cluster as demonstrations
of the task during training.

We find that the specific algorithm for selecting these demonstrations makes a big difference in the model's ability to extrapolate
to new tasks; it will be shared in a forthcoming article, so stay tuned!
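
The exact selection algorithm is not reproduced here; the snippet below is only a minimal sketch of the clustering idea described above (grouping English samples by their leading question word and drawing demonstrations from the same cluster), with hypothetical function and field names.

```python
import random
from collections import defaultdict

QUESTION_WORDS = ("what", "who", "when", "where", "why", "which", "how")

def question_cluster(question: str) -> str:
    """Coarse cluster key: the leading question word, or 'other'."""
    words = question.lower().split()
    return words[0] if words and words[0] in QUESTION_WORDS else "other"

def build_clusters(samples):
    """Group QA samples (dicts with 'question', 'context', 'answer') by question word."""
    clusters = defaultdict(list)
    for sample in samples:
        clusters[question_cluster(sample["question"])].append(sample)
    return clusters

def pick_demonstrations(clusters, sample, k=3):
    """Pick up to k other samples from the same cluster to use as priming demonstrations."""
    pool = [s for s in clusters[question_cluster(sample["question"])] if s is not sample]
    return random.sample(pool, min(k, len(pool)))
```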

For Czech SQAD 3.0, the original contexts (i.e., whole Wikipedia pages) were limited to a maximum of 8000 characters
per sequence of prime demonstrations.
The pre-processing script for Czech SQAD is available [here](https://huggingface.co/gaussalgo/xlm-roberta-large_extractive-QA_en-cs/blob/main/parse_czech_squad.py).

For training the model (and hence also intended for inference), we used the following patterns of 2-7 demonstrations:

For English samples:

*input*:
```
Question: {Q1} Context: {C1} Answer: {A1},
Question: {Q2} Context: {C2} Answer: {A2},
[...possibly more demonstrations...]

Question: {Q} Context: {C} Answer:
```
=> *target*:
```
{A}
```

For Czech samples:

*input*:
```
Otázka: {Q1} Kontext: {C1} Odpověď: {A1},
Otázka: {Q2} Kontext: {C2} Odpověď: {A2},
[...possibly more demonstrations...]

Otázka: {Q} Kontext: {C} Odpověď:
```
=> *target*:
```
{A}
```
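
As an illustration, prompts following these patterns can be assembled with a small helper like the one below. This is only a sketch: the function and parameter names are our own, and only the `Question`/`Context`/`Answer` and `Otázka`/`Kontext`/`Odpověď` keywords come from the patterns above.

```python
def build_priming_prompt(demonstrations, question, context, lang="en"):
    """Assemble a priming prompt from (question, context, answer) demonstration triples.

    The final (question, context) pair is appended with an empty answer slot
    that the model is expected to fill in.
    """
    if lang == "en":
        q_key, c_key, a_key = "Question", "Context", "Answer"
    else:
        q_key, c_key, a_key = "Otázka", "Kontext", "Odpověď"
    lines = [f"{q_key}: {q} {c_key}: {c} {a_key}: {a}," for q, c, a in demonstrations]
    lines.append("")  # blank line before the final, unanswered query
    lines.append(f"{q_key}: {question} {c_key}: {context} {a_key}:")
    return "\n".join(lines)
```

Passing `lang="cs"` switches the helper to the Czech keywords.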

The best checkpoint was picked to maximize the model's zero-shot performance on unseen Named Entity Recognition
over an out-of-distribution domain of texts and labels.

## Intended uses & limitations

This model is intended for few-shot application to any text extraction task in English and Czech where the prompt can be stated
as a natural question. E.g., to use this model for extracting customer names from text,
prompt it with demonstrations in the following format:

```python
input_text = """
Question: What is the customer's name?
Context: Origin: Barack Obama, Customer id: Bill Moe.
Answer: Bill Moe,
Question: What is the customer's name?
Context: Customer id: Barack Obama, if not deliverable, return to Bill Clinton.
Answer:"""
```
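
An analogous prompt in the Czech format could look like the following (the variable name and the example sentences are our own illustration, not taken from the training data):

```python
input_text_cs = """
Otázka: Jak se jmenuje zákazník?
Kontext: Odesílatel: Barack Obama, ID zákazníka: Bill Moe.
Odpověď: Bill Moe,
Otázka: Jak se jmenuje zákazník?
Kontext: ID zákazníka: Barack Obama, v případě nedoručení vraťte zásilku Billu Clintonovi.
Odpověď:"""
```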

Note that despite its size, English AdversarialQA has a variety of reported biases,
conditioned by the relative position or type of the answer in the context, which can affect the model's performance on new data
(see, e.g., [L. Mikula (2022)](https://is.muni.cz/th/adh58/?lang=en), Chap. 4.1).

## Usage

Here is how to use this model to answer a question over a given context, using 🤗 Transformers in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("gaussalgo/mt5-base-priming-QA_en-cs")
model = AutoModelForSeq2SeqLM.from_pretrained("gaussalgo/mt5-base-priming-QA_en-cs")

# For the expected format of input_text, see the Intended uses & limitations section above
inputs = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**inputs)

print("Answer:")
# decode the first (and only) generated sequence, dropping special tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
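
For an end-to-end run, the prompt can also be assembled programmatically. The sketch below reuses the hypothetical `build_priming_prompt` helper from the Data & Training section; `max_new_tokens=20` is an illustrative setting, not a value prescribed by this model card.

```python
# Build a priming prompt from one demonstration plus the query to be answered
demonstrations = [
    ("What is the customer's name?",
     "Origin: Barack Obama, Customer id: Bill Moe.",
     "Bill Moe"),
]
prompt = build_priming_prompt(
    demonstrations,
    question="What is the customer's name?",
    context="Customer id: Barack Obama, if not deliverable, return to Bill Clinton.",
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```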