MarcBrun committed
Commit bafffc8
1 parent: 3520491

Update README.md

Files changed (1)
  1. README.md +14 -6
README.md CHANGED
@@ -21,12 +21,19 @@ The model predicts a span of text from the context and a score for the probabili
  The model can be used directly with a *question-answering* pipeline:
  
  ```python
- >>> from transformers import pipeline
- >>> context = "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820"
- >>> question = "When was Florence Nightingale born?"
- >>> qa = pipeline("question-answering", model="MarcBrun/ixambert-finetuned-squad")
- >>> qa(question=question,context=context)
- {'score': 0.9667195081710815, 'start': 101, 'end': 105, 'answer': '1820'}
+ from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
+
+ model_name = "MarcBrun/ixambert-finetuned-squad"
+
+ # To get predictions
+ context = "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820"
+ question = "When was Florence Nightingale born?"
+ qa = pipeline("question-answering", model=model_name, tokenizer=model_name)
+ pred = qa(question=question,context=context)
+
+ # To load the model and tokenizer
+ model = AutoModelForQuestionAnswering.from_pretrained(model_name)
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
  ```
  
  ### Hyperparameters
@@ -36,6 +43,7 @@ batch_size = 8
  n_epochs = 3
  base_LM_model = "ixambert-base-cased"
  learning_rate = 2e-5
+ optimizer = AdamW
  lr_schedule = linear
  max_seq_len = 384
  doc_stride = 128
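
The updated snippet stores the prediction in `pred` instead of printing it, but the pipeline still returns the same answer dictionary shown in the removed example (`score`, `start`, `end`, `answer`). A minimal sketch of reading it back, assuming the model id from the README; the `print` lines are illustrative only:

```python
from transformers import pipeline

model_name = "MarcBrun/ixambert-finetuned-squad"
context = "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820"
question = "When was Florence Nightingale born?"

qa = pipeline("question-answering", model=model_name, tokenizer=model_name)
pred = qa(question=question, context=context)

# pred is a dict with the keys shown in the old example's printed output:
# {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
print(pred["answer"])                      # expected span: "1820"
print(pred["score"])                       # probability that the span is correct
print(context[pred["start"]:pred["end"]])  # the same span sliced from the context
```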
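
The hyperparameter list describes a standard extractive-QA fine-tuning run. As a rough sketch, assuming a Hugging Face `Trainer`-style setup that the commit does not actually show (the `output_dir`, the tokenization helper, and the exact Hub id of the base model are placeholders), the values map onto training and preprocessing arguments roughly like this:

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, TrainingArguments

# base_LM_model from the list; the full Hub id may carry an organization prefix
base_model = "ixambert-base-cased"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForQuestionAnswering.from_pretrained(base_model)

# Trainer's default optimizer is AdamW, matching the added "optimizer = AdamW" line
args = TrainingArguments(
    output_dir="ixambert-finetuned-squad",  # placeholder
    per_device_train_batch_size=8,          # batch_size = 8
    num_train_epochs=3,                     # n_epochs = 3
    learning_rate=2e-5,                     # learning_rate = 2e-5
    lr_scheduler_type="linear",             # lr_schedule = linear
)

# max_seq_len and doc_stride control how long contexts are split into
# overlapping windows during preprocessing
def tokenize_example(question, context):
    return tokenizer(
        question,
        context,
        max_length=384,                  # max_seq_len = 384
        stride=128,                      # doc_stride = 128
        truncation="only_second",        # truncate only the context, never the question
        return_overflowing_tokens=True,  # keep the extra windows created by the stride
        return_offsets_mapping=True,
        padding="max_length",
    )
```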