michal-stefanik committed
Commit 4c1473b
1 Parent(s): aa87bde

Create README.md

---
tags:
- generation
language:
- multilingual
- cs
- en
---

# Mt5-base for Prime Czech+English Generative Question Answering

This is the [mt5-base](https://huggingface.co/google/mt5-base) model with an LM head for generating extractive answers,
given a small set of 2-5 demonstrations (i.e., primes).

## Priming

Note that **this is a priming model**: similarly to GPT-3, it expects a **set of demonstrations** of your task of interest.
Rather than aiming for the best performance on conventional question answering, it learns to extrapolate the pattern of the given demonstrations
to novel tasks, such as Named Entity Recognition or Keyword Extraction.

## Data & Training

This model was trained on a combination of the [English SQuAD 1.1](https://huggingface.co/datasets/squad)
and [Czech SQAD 3.0](https://lindat.cz/repository/xmlui/handle/11234/1-3069)
Question Answering datasets.

To allow the model to rely on the pattern given in the demonstrations, we **clustered** the samples by the question word(s)
in English SQuAD and by the category in Czech SQAD, and used examples from the same cluster as the demonstrations
of the task during training.

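The exact selection procedure is not reproduced here; purely as an illustration, clustering English SQuAD samples by their question word could look like the following hypothetical sketch:

```python
from collections import defaultdict

# Hypothetical sketch: group English SQuAD samples by their leading question word,
# so that demonstrations can be drawn from the same cluster as the target sample.
QUESTION_WORDS = {"what", "who", "whom", "whose", "when", "where", "which", "why", "how"}

def question_cluster(question: str) -> str:
    first_word = question.strip().split()[0].lower().strip("?,.")
    return first_word if first_word in QUESTION_WORDS else "other"

def cluster_samples(samples):
    """samples: iterable of dicts with 'question', 'context' and 'answer' keys."""
    clusters = defaultdict(list)
    for sample in samples:
        clusters[question_cluster(sample["question"])].append(sample)
    return clusters
```
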
The specific algorithm for selecting these demonstrations makes a big difference in the model's ability to extrapolate
to new tasks and will be shared in an upcoming article; stay tuned!

For Czech SQAD 3.0, the original contexts (whole Wikipedia pages) were limited to a maximum of 8,000 characters
per sequence of priming demonstrations.
The pre-processing script for Czech SQAD is available [here](https://huggingface.co/gaussalgo/xlm-roberta-large_extractive-QA_en-cs/blob/main/parse_czech_squad.py).

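A minimal sketch of this cap, assuming a simple character-level cut (see the linked script for the actual pre-processing):

```python
MAX_CONTEXT_CHARS = 8000

def truncate_context(context: str, max_chars: int = MAX_CONTEXT_CHARS) -> str:
    # Czech SQAD contexts are whole Wikipedia pages, so they are capped before
    # being used within a sequence of priming demonstrations.
    return context[:max_chars]
```
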
For training the model (and hence also as the intended inference format), we used the following patterns of 2-7 demonstrations:

For English samples:

*input*:
```
Question: {Q1} Context: {C1} Answer: {A1},
Question: {Q2} Context: {C2} Answer: {A2},
[...possibly more demonstrations...]

Question: {Q} Context: {C} Answer:
```
=> *target*:
```
{A}
```

For Czech samples:

*input*:
```
Otázka: {Q1} Kontext: {C1} Odpověď: {A1},
Otázka: {Q2} Kontext: {C2} Odpověď: {A2},
[...possibly more demonstrations...]

Otázka: {Q} Kontext: {C} Odpověď:
```
=> *target*:
```
{A}
```

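The following helper is not part of the original card; it is only a minimal sketch of how a priming input in the English pattern above could be assembled from a list of (question, context, answer) demonstrations:

```python
def build_priming_input(demonstrations, question, context):
    """Format the demonstrations, followed by the final unanswered question,
    in the English priming pattern shown above."""
    primes = ",\n".join(
        f"Question: {q} Context: {c} Answer: {a}" for q, c, a in demonstrations
    )
    return f"{primes},\n\nQuestion: {question} Context: {context} Answer:"

# Hypothetical usage:
demos = [
    ("Who received the package?", "Sender: John Smith, Receiver: Bill Moe.", "Bill Moe"),
    ("Who received the package?", "Sent by Alice Brown, delivered to Carol King.", "Carol King"),
]
input_text = build_priming_input(
    demos,
    "Who received the package?",
    "Delivery to: Barack Obama, if not deliverable, deliver to Bill Clinton.",
)
```

The Czech pattern is analogous, with `Otázka`, `Kontext` and `Odpověď` in place of the English keywords.
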
The best checkpoint was picked to maximize the model's zero-shot performance on Named Entity Recognition
over an out-of-distribution domain of texts and labels.

## Intended uses & limitations

This model is intended for few-shot application to any text extraction task in English and Czech where the prompt can be stated
as a natural question. For example, to use this model for extracting customer names from a text,
prompt it with demonstrations in the following format:

```python
input_text = """
Question: What is the customer's name? Context: Sender: John Smith, Receiver: Bill Moe. Answer: Bill Moe,
{possibly more demonstrations here}
Question: What is the customer's name? Context: Delivery to: Barack Obama, if not deliverable, deliver to Bill Clinton. Answer:
"""
```
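
The same format can be phrased for other extraction tasks as well; for instance, a hypothetical keyword-extraction prompt (not taken from the original card) could look like:

```python
# Hypothetical example of the same priming format applied to keyword extraction
input_text = """
Question: What are the keywords of the text? Context: The new stadium will host concerts as well as football matches. Answer: stadium, concerts, football,
{possibly more demonstrations here}
Question: What are the keywords of the text? Context: The council approved funding for a new city library. Answer:
"""
```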

Note that despite its size, English SQuAD has a variety of reported biases,
conditioned on the relative position or type of the answer in the context, that can affect the model's performance on new data
(see, e.g., [L. Mikula (2022)](https://is.muni.cz/th/adh58/?lang=en), Chap. 4.1).

## Usage

Here is how to use this model to answer a question over a given context using 🤗 Transformers in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("gaussalgo/mt5-base-priming-QA_en-cs")
model = AutoModelForSeq2SeqLM.from_pretrained("gaussalgo/mt5-base-priming-QA_en-cs")

# For the expected format of input_text, see the Intended uses & limitations section above
inputs = tokenizer(input_text, return_tensors="pt")

# Generate the continuation of the final, unanswered demonstration
outputs = model.generate(**inputs)

print("Answer:")
# generate() returns a batch of sequences; decode the first one
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
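
Depending on the expected answer length, you may also want to pass standard `generate()` arguments, for example:

```python
# Optional: allow longer answers and use beam search
outputs = model.generate(**inputs, max_new_tokens=32, num_beams=3)
```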