ocastel committed

Commit fb1f066
1 Parent(s): 0bbac5e

Update README.md

Files changed (1):
  1. README.md +19 -6
README.md CHANGED
@@ -36,7 +36,7 @@ question = 'When was Obama inaugurated?'
  text = f'Text: {passage}.\nQuestion: {question}\nAnswer:{tokenizer.additional_special_tokens[0]}.'
  encoded_input = tokenizer(text, return_tensors='pt')
  output_ids = model.generate(input_ids=encoded_input.input_ids, attention_mask=encoded_input.attention_mask,
-                             eos_token_id=tokenizer.additional_special_tokens_ids[1])
+                             eos_token_id=tokenizer.additional_special_tokens_ids[1], num_beams=1, max_length=512, min_length=3)
  tokenizer.decode(output_ids[0])
  ```
  The generated answer is then `"<pad><extra_id_0> 2009<extra_id_1>"`, while the one generated by the original [T5-v1.1-large](https://huggingface.co/google/t5-v1_1-large) is `"<pad><extra_id_0> On January 20, 2009<extra_id_1>"` - a correct yet non-extractive answer.
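For context on what this hunk changes, here is a minimal end-to-end sketch of how the updated `generate` call fits together. It is a reconstruction under assumptions: the checkpoint name, `passage`, and `question` below are illustrative placeholders, not taken from this diff (the README defines its own upstream of the hunk).

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Placeholder checkpoint -- substitute the fine-tuned model this README describes.
model_name = 'google/t5-v1_1-large'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Illustrative inputs; the README defines its own passage/question earlier.
passage = 'Barack Obama was inaugurated as president on January 20, 2009.'
question = 'When was Obama inaugurated?'

# The answer slot is marked with the first sentinel token, <extra_id_0>.
text = f'Text: {passage}.\nQuestion: {question}\nAnswer:{tokenizer.additional_special_tokens[0]}.'
encoded_input = tokenizer(text, return_tensors='pt')

# num_beams=1 pins the decoder to greedy search; setting eos_token_id to
# <extra_id_1> stops generation right after the answer span; min_length=3
# blocks the degenerate empty answer "<pad><extra_id_0><extra_id_1>".
output_ids = model.generate(input_ids=encoded_input.input_ids,
                            attention_mask=encoded_input.attention_mask,
                            eos_token_id=tokenizer.additional_special_tokens_ids[1],
                            num_beams=1, max_length=512, min_length=3)
print(tokenizer.decode(output_ids[0]))  # e.g. "<pad><extra_id_0> 2009<extra_id_1>"
```

The three added arguments are exactly what the commit introduces; they presumably match the greedy, extractive decoding setup studied in the cited paper.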
@@ -59,6 +59,22 @@ The gap between the two models diminishes as more training examples are introduced
  
  ### BibTeX entry and citation info
  ```bibtex
+ @inproceedings{ram-etal-2021-shot,
+     title = "Few-Shot Question Answering by Pretraining Span Selection",
+     author = "Ram, Ori and
+       Kirstain, Yuval and
+       Berant, Jonathan and
+       Globerson, Amir and
+       Levy, Omer",
+     booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
+     month = aug,
+     year = "2021",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2021.acl-long.239",
+     doi = "10.18653/v1/2021.acl-long.239",
+     pages = "3066--3079",
+ }
  @misc{castel2021optimal,
      title={How Optimal is Greedy Decoding for Extractive Question Answering?},
      author={Or Castel and Ori Ram and Avia Efrat and Omer Levy},
@@ -66,9 +82,6 @@ The gap between the two models diminishes as more training examples are introduced
      eprint={2108.05857},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
- }```
- <a href="https://huggingface.co/exbert/?model=distilbert-base-uncased">
- <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
- </a>
- 
+ }
  
+ ```
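One practical note on the decoded output shown in the first hunk: the string still carries the sentinel markers. A minimal sketch for stripping them follows; the `extract_answer` helper is our illustration, not part of the README.

```python
import re

def extract_answer(decoded: str) -> str:
    """Return the span between <extra_id_0> and <extra_id_1> (or end of string)."""
    match = re.search(r'<extra_id_0>(.*?)(?:<extra_id_1>|$)', decoded, re.DOTALL)
    return match.group(1).strip() if match else ''

print(extract_answer('<pad><extra_id_0> 2009<extra_id_1>'))  # -> '2009'
```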