---
tags:
- question-answering
language:
- multilingual
- cs
- en
---

# mT5-base for Czech+English Generative Question Answering

This is the [mt5-base](https://huggingface.co/google/mt5-base) model with an LM head for generating extractive answers. In contrast to our [mt5-base-priming](https://huggingface.co/gaussalgo/mt5-base-priming-QA_en-cs), this is a traditional sequence-to-sequence model without priming, though it can also be used for other text extraction tasks, such as Named Entity Recognition in zero-shot settings (with a significant drop in quality compared to priming).

## Intended uses & limitations

This model is intended to *generate* a segment of a given context that contains the answer to a given question (extractive question answering) in English and Czech.
Given the fine-tuning on two languages and the good zero-shot cross-lingual transfer reported for other fine-tuned multilingual language models, the model will likely also work on other languages, with some drop in quality.

Note that despite its size, the English SQuAD dataset has a variety of reported biases,
conditioned on the relative position or the type of the answer in the context, which can affect the model's performance on new data
(see, e.g., [L. Mikula (2022)](https://is.muni.cz/th/adh58/?lang=en), Chap. 4.1).

## Usage

Here is how to use this model to answer a question about a given context using 🤗 Transformers in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("gaussalgo/mt5-base-generative-QA_en-cs")
model = AutoModelForSeq2SeqLM.from_pretrained("gaussalgo/mt5-base-generative-QA_en-cs")

# Czech context: according to Slovak folk tradition, Juro Jánošík was endowed with
# magical objects (a magic shepherd's axe, an enchanted belt) that gave him
# supernatural abilities; he robbed the rich and gave to the poor.
context = """
Podle slovenského lidového podání byl Juro Jánošík obdařen magickými předměty (kouzelná valaška, čarovný opasek),
které mu dodávaly nadpřirozené schopnosti. Okrádal především šlechtice,
trestal panské dráby a ze svého lupu vyděloval část pro chudé, tedy bohatým bral a chudým dával.
"""
# "What abilities did the magical objects give Juro Jánošík?"
question = "Jaké schopnosti daly magické předměty Juro Jánošíkovi?"

inputs = tokenizer(question, context, return_tensors="pt")
outputs = model.generate(**inputs)

print("Answer:")
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
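
The same pattern works for English questions and, as noted above, for other text extraction tasks phrased as questions. A minimal sketch reusing the tokenizer and model loaded above (the example texts are invented for illustration):

```python
# English QA with the same model (illustrative example text)
context = """
The Great Fire of London swept through the central parts of the city
from Sunday 2 September to Thursday 6 September 1666.
"""
question = "When did the Great Fire of London take place?"

inputs = tokenizer(question, context, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Zero-shot extraction phrased as a question (expect a drop in quality
# compared to the priming-based model mentioned above)
question = "Which city is mentioned in the text?"
inputs = tokenizer(question, context, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```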

## Training

The model has been trained using the [Adaptor library](https://github.com/gaussalgo/adaptor) v0.1.5, in parallel on both Czech and English data, with the following parameters:

```python
from adaptor.utils import AdaptationArguments, StoppingStrategy

training_arguments = AdaptationArguments(output_dir="train_dir",
                                         learning_rate=5e-5,
                                         stopping_strategy=StoppingStrategy.ALL_OBJECTIVES_CONVERGED,
                                         do_train=True,
                                         do_eval=True,
                                         warmup_steps=1000,
                                         max_steps=100000,
                                         gradient_accumulation_steps=4,
                                         eval_steps=100,
                                         logging_steps=10,
                                         save_steps=1000,
                                         num_train_epochs=50,
                                         evaluation_strategy="steps",
                                         remove_unused_columns=False)
```
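
For context, here is a minimal sketch of how such arguments are typically wired into an Adaptor training run; the data paths and objective IDs below are hypothetical, and the exact setup is in the linked training script:

```python
from adaptor.adapter import Adapter
from adaptor.lang_module import LangModule
from adaptor.objectives.seq2seq import Sequence2Sequence
from adaptor.schedules import ParallelSchedule

lang_module = LangModule("google/mt5-base")

# One seq2seq objective per language, trained in parallel (hypothetical data paths)
qa_en = Sequence2Sequence(lang_module,
                          texts_or_path="squad_en_inputs.txt",
                          labels_or_path="squad_en_answers.txt",
                          batch_size=4,
                          objective_id="SQuAD-en")
qa_cs = Sequence2Sequence(lang_module,
                          texts_or_path="sqad_cs_inputs.txt",
                          labels_or_path="sqad_cs_answers.txt",
                          batch_size=4,
                          objective_id="SQAD-cs")

# Both objectives share the model and are scheduled in parallel
schedule = ParallelSchedule(objectives=[qa_en, qa_cs], args=training_arguments)
Adapter(lang_module, schedule, args=training_arguments).train()
```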

You can find the full training script in [train_mt5_qa_en+cs.py](https://huggingface.co/gaussalgo/mt5-base-generative-QA_en-cs/blob/main/train_mt5_qa_en%2Bcs.py), which is reproducible after the specific data preprocessing for Czech SQAD in [parse_czech_squad.py](parse_czech_squad.py).