MarcBrun committed
Commit 5edd065
1 Parent(s): 86a16c7

Update README.md

Files changed (1):
  1. README.md +4 -4
README.md CHANGED
@@ -23,7 +23,7 @@ Nace en los montes Universales, en la sierra de Albarracín, sobre la rama occid
 
 This is a basic implementation of the multilingual model ["ixambert-base-cased"](https://huggingface.co/ixa-ehu/ixambert-base-cased), fine-tuned on SQuAD v1.1, that is able to answer basic factual questions in English, Spanish and Basque. This model reaches an F1 score of 89.1 on the SQuAD 1.1 dev set.
 
-# Overview
+## Overview
 
 **Language model:** ixambert-base-cased
 **Languages:** English, Spanish and Basque
@@ -32,7 +32,7 @@ This is a basic implementation of the multilingual model ["ixambert-base-cased"]
 **Eval data:** SQuAD v1.1
 **Infrastructure:** 1x GeForce RTX 2080
 
-### Outputs
+## Outputs
 
 The model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability that the span of text is the correct answer. For example:
 
@@ -40,7 +40,7 @@ The model outputs the answer to the question, the start and end positions of the
 {'score': 0.9667195081710815, 'start': 101, 'end': 105, 'answer': '1820'}
 ```
 
-### How to use
+## How to use
 
 ```python
 from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
@@ -58,7 +58,7 @@ model = AutoModelForQuestionAnswering.from_pretrained(model_name)
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 ```
 
-### Hyperparameters
+## Hyperparameters
 
 ```
 batch_size = 8
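
For reference, the `How to use` snippet is truncated by the diff context above. Below is a minimal sketch of the usage pattern it points to; the repository id, question, and context strings are assumptions added for illustration (the diff itself only shows the imports, the two `from_pretrained` calls, and the sample output dict).

```python
# Minimal sketch reconstructed from the truncated "How to use" snippet above.
# Assumption: the fine-tuned model lives at this repository id (not shown in the diff).
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "MarcBrun/ixambert-finetuned-squad"  # assumed repo id

# Load the extractive QA model and its tokenizer, as in the README snippet.
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Wrap them in a question-answering pipeline.
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

# Illustrative question/context pair (English, Spanish or Basque should all work).
context = (
    "Florence Nightingale, known for being the founder of modern nursing, "
    "was born in Florence, Italy, in 1820."
)
question = "When was Florence Nightingale born?"

result = qa(question=question, context=context)
print(result)
# Expected shape of the output, matching the example quoted in the diff:
# {'score': ..., 'start': ..., 'end': ..., 'answer': '1820'}

# 'start' and 'end' are character offsets into the context string,
# so the answer span can be recovered by slicing:
print(context[result["start"]:result["end"]])
```

The `score`, `start`, `end` and `answer` keys match the example output shown in the `Outputs` hunk; since `start` and `end` are character offsets into the context, slicing the context with them recovers the answer text.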