---
language:
- en
- es
- eu
datasets:
- squad
---

# Description

This is a basic implementation of the multilingual model ["ixambert-base-cased"](https://huggingface.co/ixa-ehu/ixambert-base-cased), fine-tuned on SQuAD version 1.1, that is able to answer basic factual questions in English, Spanish and Basque. It extracts the span of text in which the answer is found.

### Outputs

The model predicts a span of text from the context, together with a score representing the probability that this span is the correct answer. Concretely, it returns a dictionary with:

* `answer`: the span of text extracted from the context.
* `start`, `end`: the character offsets of that span within the context.
* `score`: the probability assigned to the predicted span.

### How to use

The model can be used directly with a *question-answering* pipeline:

```python
>>> from transformers import pipeline
>>> context = "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820"
>>> question = "When was Florence Nightingale born?"
>>> qa = pipeline("question-answering", model="MarcBrun/ixambert-finetuned-squad")
>>> qa(question=question, context=context)
{'score': 0.9667195081710815, 'start': 101, 'end': 105, 'answer': '1820'}
```
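
Because the base model is multilingual, the same pipeline accepts questions and contexts in Spanish or Basque. A minimal sketch; the Spanish example below is illustrative, and no recorded model output is shown:

```python
>>> context_es = "Florence Nightingale, conocida por ser la fundadora de la enfermería moderna, nació en Florencia, Italia, en 1820"
>>> question_es = "¿Cuándo nació Florence Nightingale?"
>>> qa(question=question_es, context=context_es)
```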

### Training procedure

The pre-trained model was fine-tuned for question answering using the following hyperparameters, which were selected on a validation set:

* Batch size = 32
* Learning rate = 2e-5
* Epochs = 3

The optimizer used was AdamW, and the loss optimized was the standard extractive QA objective: cross-entropy over the start and end positions of the answer span.
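
For reference, here is a minimal sketch of how a fine-tuning run with these hyperparameters could be reproduced with the `transformers` Trainer. The preprocessing follows the standard Hugging Face extractive-QA recipe; `max_length=384`, `weight_decay=0.01` and `output_dir` are assumptions not stated in this card:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "ixa-ehu/ixambert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForQuestionAnswering.from_pretrained(base)
squad = load_dataset("squad")  # SQuAD v1.1

def preprocess(examples):
    # Tokenize question/context pairs and map each answer to token positions.
    inputs = tokenizer(
        [q.strip() for q in examples["question"]],
        examples["context"],
        max_length=384,               # assumption: not stated in the card
        truncation="only_second",     # truncate the context, never the question
        return_offsets_mapping=True,
        padding="max_length",
    )
    offsets = inputs.pop("offset_mapping")
    starts, ends = [], []
    for i, offset in enumerate(offsets):
        answer = examples["answers"][i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        seq_ids = inputs.sequence_ids(i)
        # Locate the first and last tokens that belong to the context.
        ctx_start = seq_ids.index(1)
        ctx_end = len(seq_ids) - 1 - seq_ids[::-1].index(1)
        if offset[ctx_start][0] > start_char or offset[ctx_end][1] < end_char:
            # Answer was truncated away: label the span as (0, 0).
            starts.append(0)
            ends.append(0)
        else:
            idx = ctx_start
            while idx <= ctx_end and offset[idx][0] <= start_char:
                idx += 1
            starts.append(idx - 1)
            idx = ctx_end
            while idx >= ctx_start and offset[idx][1] >= end_char:
                idx -= 1
            ends.append(idx + 1)
    inputs["start_positions"] = starts
    inputs["end_positions"] = ends
    return inputs

tokenized = squad.map(
    preprocess, batched=True, remove_columns=squad["train"].column_names
)

args = TrainingArguments(
    output_dir="ixambert-finetuned-squad",
    per_device_train_batch_size=32,  # Batch size = 32
    learning_rate=2e-5,              # Learning rate = 2e-5
    num_train_epochs=3,              # Epochs = 3
    weight_decay=0.01,               # assumption: not stated in the card
)

# AutoModelForQuestionAnswering computes the cross-entropy loss over the
# start/end positions internally, and Trainer uses AdamW by default.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```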